Do NOT do this if you live in a densely populated area (e.g. apartment complex). You'll create noise for yourself and everybody else. Classic prisoner's dilemma - a few people could be assholes and profit from it, but if everyone's an asshole everybody suffers.
General rule on TX power: start on low and increase only if you know (or can confirm) it helps. Go back down if it doesn't.
The 6GHz space isn’t even competing with classic WiFi. It’s really fine. There’s no prisoner’s dilemma here, and no moral high ground to be gained by setting it to low. It will make virtually no difference to your neighbors.
The real world difference is actually pretty minimal between power settings.
The actual risk with modern hardware is that the high power setting starts running the power amplifier in a higher-distortion region of its curve, which degrades signal quality in exchange for incrementally longer range.
This is also why it makes such an enormous difference to put your AP in the same room, if at all possible. Sneak a cable somewhere, park the AP in the far corner of the room, sure. But with zero walls in between, the difference is huge.
The people reading this are techies. Nobody else will do this. Either it should be built into the protocol, or the advice should be abandoned.
The old tales about interfering with your neighbors, prisoner's dilemmas, and claiming the moral high ground by setting it to low are old-school WiFi mythology that continues to be parroted around.
In other words: you don't need carrier sensing to work if you're not getting drowned in noise to begin with.
There is no such problem as "you have to shout loudly enough so the others hear that you're there". There's no such thing, for at least two different reasons. 1) They hear everyone just fine, weak and strong, all at the same time. 2) It doesn't matter even if they didn't, because you obviously hear them if you're getting clobbered by them, and so your router can channel hop around them even if they don't channel hop around you.
1. It's the AP that has to decide to change channel, and if you live somewhere with channel contention, from its perspective all channels will be busy. At that point, if your channel appears the quietest (either because it genuinely has the least noise or simply because your clients aren't active), a neighboring AP will decide to clobber your channel. Their WiFi devices may also not hear you and won't back off to give you airtime, even though you hear theirs and give them airtime (see the toy sketch below).
2. Having your AP change channel (note: channel hopping is something else entirely, which isn't used for WiFi) wouldn't help when all channels are busy. As long as your usage appears quiet, other APs will keep moving on top of you during their channel optimization.
For residential use, the only real solution is technology that doesn't propagate to your neighbors: 5/6GHz, many APs, and good thick walls (mmm, reinforced concrete). WiFi channels are a mechanism for letting a few pieces of equipment coexist in the same space, but they're of limited use when it comes to segregating your space from your neighbors' - especially if you want good performance, as there are very few wide channels available.
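A toy sketch of the auto-channel-selection dynamic from point 1 (all utilization numbers here are invented for illustration): every neighboring AP surveying the band reaches the same conclusion and piles onto the quiet channel - yours.

```python
# Toy model of neighbor APs running auto channel selection.
# Utilization percentages are made up for illustration only.
observed_utilization = {
    36: 62,   # busy neighbor
    44: 55,   # busy neighbor
    52: 48,   # busy neighbor
    60: 5,    # your channel: your clients are mostly idle
}

def pick_channel(utilization):
    """Pick the channel that currently looks quietest."""
    return min(utilization, key=utilization.get)

# Every neighboring AP doing this survey reaches the same conclusion:
for ap in ["neighbor-1", "neighbor-2", "neighbor-3"]:
    print(f"{ap} moves to channel {pick_channel(observed_utilization)}")
    # all three pile onto channel 60 - the one that looked quiet
```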
Find the quietest 20 MHz channel available on 5 or 6 GHz. It'll be far more reliable than trying to battle someone over a 320 MHz one.
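On Linux, a rough way to eyeball this is to parse a scan and count how many networks sit on each frequency. It only counts beacons, not actual airtime, but it's a decent first pass. A minimal sketch; `wlan0` is an assumed interface name and scanning needs root:

```python
# Count how many BSSs were seen per frequency in an `iw` scan.
# Assumes a Linux box with the `iw` tool and an interface called
# wlan0 (adjust to your setup); run as root.
import re
import subprocess
from collections import Counter

scan = subprocess.run(
    ["iw", "dev", "wlan0", "scan"],
    capture_output=True, text=True, check=True,
).stdout

# Each BSS entry in the output carries a "freq: <MHz>" line.
freqs = [int(m) for m in re.findall(r"freq:\s*(\d+)", scan)]

# Only look at 5 GHz and 6 GHz (roughly 5150-7125 MHz).
crowding = Counter(f for f in freqs if f >= 5000)

for freq, count in sorted(crowding.items(), key=lambda kv: kv[1]):
    print(f"{freq} MHz: {count} network(s) seen")
```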
I live in a very dense part of Chicago. 2.4 and 5 are a minefield, just a thick soup of interference on everything but the DFS channels (which I get kicked off of too often being close to two airports). While it could be that zero neighbors have 6E or 7 equipment, I find that hard to believe, but nothing comes up on the scan.
If wifi becomes a pain within a shared building then seriously consider ethernet. Slimline stick-on trunking will hide the wires at about £1-2/m. A box of solid-core CAT6 is less than £1/m. You will also need some back boxes, modules and face plates (~£2.50 each) and a punch-down tool (a fiver?). Or you can try to bodge RJ45 plugs onto the solid-core CAT6 - please don't unless you really know what you are doing: it looks messy and is seriously prone to weird failures.
In my case, I forgot I had to change encryption type to associate at higher speeds.
It doesn't _really_ seem to matter what channel width or frequency I use; I tend to get around 600Gbps from my iPhone (17 Pro).
When I make a point of ensuring I'm on the correct AP, with line of sight from a few feet away, I sometimes break 1Gbps. I was surprised, watching TV the other day, to randomly get a 1.2Gbps speed test, which is one of the faster results I've seen on WiFi.
(10Gbps internet, UDM Pro, UDM Enterprise 2.5Gbps switch for clients, PoE WiFi 7 APs on 6GHz).
Honestly, I'd say that overall 6GHz has been more trouble than it's worth. Flipping the switch to WPA2/3 as required by 6GHz broke _all_ of my clients last year, so I had to revert, and now I just have a separate SSID for the clients I have the energy to manually retype the password into. 6GHz pretty much only works with line of sight and from a handful of feet away. There were bugs last year in Apple's "Disable 6e" setting, so it kept re-enabling itself. MLO was bad too, so devices would stick to 6GHz even when there was basically no usable signal.
Over the course of the past year, it's gotten pretty tolerable, but sometimes I still wonder why I bother-- I'm pretty sure my real world performance would be better if I just turned 6ghz off again.
I haven't experienced any issues with 6ghz enabled, although honestly there isn't much noticeable benefit on an iPhone either in real-world usage. MLO was causing some issues for my non-WiFi 7 Apple devices - since WiFi credentials are sync'd in iCloud, I found that my laptop was joining the MLO network even though I never explicitly told it to - so I have disabled MLO.
I just tested 1.3Gbps through some reinforced concrete on Wi-Fi 6, no line of sight.
Is all that tinkering really needed?
Even the shittiest consumer WiFi will generally give a satisfactory speed test result with decent speeds, despite being completely unusable for anything real-time like video conferencing, Remote Desktop or gaming. Your random high-speed result may very well be down to luck and doesn’t represent how stable and usable the connection will be.
In fact what the author does here (cranking up the channel width, etc.) might make for a good speed test result, but it will start dropping out with terrible latency spikes and jitter the second he turns away from his WiFi AP.
Smaller channel widths are generally preferable: the top speed is lower, but that speed will be much more stable.
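One cheap sanity check, rather than trusting a single speed-test number: sample latency repeatedly (ideally while the link is loaded) and look at the spread, not the average. A minimal sketch; the target host and port are arbitrary placeholders:

```python
# Very rough latency/jitter probe: time repeated TCP connects.
# A speed test can look great while latency is all over the place;
# this gives a feel for the spikes. Host/port are just examples.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "example.com", 443, 20
rtts = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connect time is roughly one round trip
    rtts.append((time.perf_counter() - t0) * 1000)  # milliseconds
    time.sleep(0.5)

print(f"median {statistics.median(rtts):.1f} ms, "
      f"max {max(rtts):.1f} ms, "
      f"jitter (stdev) {statistics.stdev(rtts):.1f} ms")
```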
What kind of magic iPhone do you have? I don't think there is any device that can achieve anything close to that today [1]
---
[1] The most recent (2024) record is claimed to be 938 Gbps, but only over a 12cm distance [2]
[2] https://discovery.ucl.ac.uk/id/eprint/10196331/1/938nbspGb_s...
All? Really?
> and now I just have a separate SSID for clients I have the energy to manually retype the password into
Type it once and it will be saved, as has been the case for years.
https://arstechnica.com/tech-policy/2025/07/trump-and-congre...
FCC enforcement for interference can work for occasional troublemakers, but there's no way they can go after every single consumer who (most likely without even realizing it) bought a 6GHz-capable router that is encroaching on the now-privatized frequency band.
What do these devices do that can't be accomplished by an OpenWrt One + an external AP for less money and fully FOSS?
Another option would be a mini-PC running Linux, but it's perhaps overkill for a domestic router.
Edit: Actually the OpenWrt One does have built-in WiFi, so you don't even need the external AP.
Nice UI (which is what the company is best known for: https://ui.com).
We have tested WiFi-7 gear in our lab, from the cheapest TP-Link Omada EAP783 to the latest, most expensive Cisco AP + controller.
Our findings:
- Driver quality for the modems is still below average on Linux. If you want to test WiFi-7, go with the Intel BE200 card - most stuff works there. Warning: this card does not work with AMD CPUs.
- We have seen quite a few problems with Qualcomm and Mediatek cards: either latency issues, weird bugs on 6GHz (not showing all SSIDs) or throughput problems.
- Always go with the latest kernel and the freshest firmware blobs (a quick check of what a box is actually running is sketched below).
- MLO is difficult to get running properly. Very buggy from all sides. Also needs the latest version of wpa_supplicant - otherwise it will not come up. And be aware: there are several MLO modes and not all of them offer "two links for twice the bandwidth".
Also expect to hit problems on the AP side. If you read the TP-Link Omada firmware changelogs, you can see that they are still struggling with a lot of basic functionality. So keep the APs updated to the latest beta versions too.
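Not a fix for anything above, but since "latest kernel, latest wpa_supplicant" keeps coming up: a trivial, Linux-only sketch to see at a glance what a test box is actually running (assumes wpa_supplicant is on the PATH):

```python
# Print the running kernel and wpa_supplicant versions, since stale
# versions are the usual culprit for flaky WiFi-7 / MLO behaviour.
import platform
import subprocess

def run(cmd):
    try:
        out = subprocess.run(cmd, capture_output=True, text=True)
        return out.stdout.strip() or out.stderr.strip()
    except FileNotFoundError:
        return "(not installed)"

print("kernel:         ", platform.release())
wpa = run(["wpa_supplicant", "-v"]) or "(unknown)"
print("wpa_supplicant: ", wpa.splitlines()[0])  # e.g. "wpa_supplicant v2.11"
```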
I use a Qualcomm QCNCM865 in my private setup with an AMD CPU. It feels like the latest firmware blobs and kernel drivers have brought stability to those components.
What causes that? (I have no idea how wifi cards work.)
But I can confirm that the Intel BE200 works with the popular Intel N100/N305 mini PCs.
Can you elaborate on this? I don't know much about WiFi so I'm curious what CPU work the router needs to do and what wouldn't be offloaded to hardware somehow (like most routing/forwarding/QoS duties can be).
You need to ensure the server is able to send the test data quickly enough so that the network link becomes the bottleneck.
In his case he was running the test server on the router, and the router’s CPU was unable to churn out the data quickly enough to actually saturate the network link (most network equipment does the network switching/routing/NAT in hardware and so doesn’t actually come equipped with a CPU that is capable of line-rate TCP because it’s not actually needed in normal operation).
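The usual workaround is to run the test server on a machine with a capable CPU on the wired side and test through the router rather than to it. A minimal sketch wrapping the iperf3 CLI; the server address below is a placeholder:

```python
# Run an iperf3 test against a server on a capable wired machine,
# not on the router itself, so the endpoint CPU isn't the bottleneck.
# Assumes iperf3 is installed and `iperf3 -s` is running on SERVER.
import json
import subprocess

SERVER = "192.168.1.50"   # placeholder: a wired desktop/NAS, not the router

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-J"],   # -J = JSON output
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# For a default TCP test, the receiver-side summary is the honest number.
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"throughput: {bps / 1e9:.2f} Gbps")
```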
Still beats Wi-Fi by a mile so I'm not complaining.
In simple terms, far away = more work to communicate = more airtime = less throughput.
It probably only matters with multiple devices.
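A rough back-of-the-envelope for the multi-device case (the PHY rates below are invented for illustration): with airtime fairness each client keeps its own rate for its share of airtime, while with per-packet fairness the slow, far-away client drags the whole cell down toward its rate.

```python
# Two clients on one AP: one near (fast rate), one far (slow rate).
# Rates are made up for illustration.
near = 800e6  # bits/s the near client achieves with the air to itself
far = 50e6    # bits/s the far client achieves with the air to itself

# Airtime fairness: each client gets half the airtime at its own rate.
airtime_near, airtime_far = near / 2, far / 2

# Per-packet "fairness": equal packet counts, so the slow client eats
# most of the airtime and the total converges on the harmonic mean.
per_packet_total = 2 / (1 / near + 1 / far)

print(f"airtime-fair: near {airtime_near/1e6:.0f} Mbps, "
      f"far {airtime_far/1e6:.0f} Mbps, "
      f"total {(airtime_near + airtime_far)/1e6:.0f} Mbps")
print(f"per-packet:   total {per_packet_total/1e6:.0f} Mbps shared by both")
```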
I also sometimes have alerts saying more than one device is using the same IP address (DHCP issues) but it won't tell me which ones! At least give me the MAC addresses!
Unifi's stuff is great, but the software is sometimes infuriating.
You are right about Unifi's software being a pain. I love that they keep changing the UI, the controller on the server side is dependency hell, and there's MongoDB to boot, just in case you need to manage n^webscale deployments.
IDS is probably overkill for a home network anyway.
I recently replaced said router with a Dream Router 7.
Guess I need to do some debugging of my own
It's been a problem for _years_. Basically the wifi card switches to another channel every so often to see if anyone wants to do AirDrop. It's a bit of a joke, to be honest, that Apple still hasn't fixed this.
"Silicon Valley of Europe", my a*s.
32GB isn't very big these days. In terms of cost, a decent cheeseburger costs more than a 32GB flash card does.
A few months ago I needed a friend to send me a 32GB file. This took over 8 hours to accomplish with his 10Mbps upstream. 8 hours! I felt like it was 1996 again and I was downloading Slackware disksets with a dialup modem.
We needed to set up a resumable way to get his computer to send that file to my computer, and be semi-formal about it because 8 hours presents a lot of time for stuff to break.
But if we had gigabit speeds, instead? We could have moved that file in less than 5 minutes. That'd have been no big deal, with no need to be formal at all: If a 5-minute file transfer dies for some reason, then it's simple enough to just start it over again.
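The arithmetic really is that stark (protocol overhead ignored):

```python
# Time to move a 32 GB file at different upstream speeds.
size_bits = 32 * 8 * 10**9   # 32 GB expressed in bits

for label, rate_bps in [("10 Mbps", 10e6), ("1 Gbps", 1e9)]:
    seconds = size_bits / rate_bps
    print(f"{label}: {seconds / 3600:.1f} h ({seconds / 60:.0f} min)")
    # ~7.1 h at 10 Mbps vs ~4 min at 1 Gbps
```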
Because it's a utility and there's a wide world of use cases out there.
For electrical maybe someone wants to charge an electric car fully overnight, or use a welder in their garage. Or use some big appliance in their kitchen.
For Internet maybe they make videos, games or other types of data-heavy content and need to be able to upload and download it.
It wasn't that long ago that "internet" at home literally meant just one person using it.
- Games (400GB for Ark, 235GB for Call of Duty, 190GB for God of War)
- LLMs (e.g. DeepSeek-V3.2-Exp at 690GB or Kimi-K2 at 1030GB unquantized)
- Blockchains (Bitcoin blockchain approaching 700GB)
- Deep learning datasets (1.1PB for Anna's Archive, 240TB for LAION-5B at low resolution)
- Backups
- Online video processing/storage
- Piracy (Torrenting)
Of course you can download those things on a slower connection, but I imagine that it would be a lot nicer if it went faster.
Ark is a strange case. It compresses very very well. Most of it ends up with compression ratios of around 80%.
> Total size on disk is 628.32 GiB and total download size is 171.42 GiB.
From SteamDB's summary of Ark's content depots.
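Quick check of the quoted figures:

```python
# Download vs. on-disk size for Ark, from the SteamDB numbers above.
on_disk = 628.32    # GiB
download = 171.42   # GiB

print(f"download is {download / on_disk:.0%} of the installed size")
print(f"i.e. roughly {1 - download / on_disk:.0%} space savings in transit")
```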
This still doesn’t guarantee, however, that you will achieve this speed to any random host on the internet - the pipe to Cloudflare/Netflix may very well be fat and optimized, but that doesn’t guarantee the pipe to a random small hosting provider doesn’t go over a 56k modem somewhere (I jest... but only a bit).
Test to where you want to exchange high speed traffic.
Or even just work stuff, I've had to shift around several TB of 3D assets for my job while working from home.
Or they seed large datasets for other researchers.
Why not? Life’s too short anyways, and playing around with tech is one of those things that bring me joy.
Some people can manage with slow network speeds at home, even though 100 Gbps single mode fiber is perfectly doable nowadays. And it's reasonable, because new SSDs do almost 120 Gbps.
1 Gbps made sense 20 years ago when single hard disks had similar performance. For some weird reason LAN speeds did not improve at the same rate as the disks did.
But then again, I guess many could also still manage with 100 Mbps connectivity at home. Still enough for 4k video, web browsing and most other "ordinary" use cases.
When it comes to wired networking, sending data 15cm is a very different problem from sending it 100m reliably. That, plus the lack of consumer demand for >1Gbps, kept consumer equipment expensive because there was no mass market to drive prices down. M.2 removes the cable entirely.
I figured 10Gbps would be the standard by now (and was way off), and yet it's not even the default on high-end motherboards - 2.5Gbps is becoming a lot more common, though.
All the new MacBook Pros come with 64Gbps wired networking.
With an adapter you can also connect 100GbE, but that’s not very special.
It is very slowly improving, but by far the fastest widely used services I've seen are a few gacha games and Steam both downloading their updates. Which is rather sad.
Windows Update is slow, macOS update is abysmally slow, both iOS and Android stores also bottleneck somewhere. Most cloud storage services are just as bad. Most of these can't even utilise half a gigabit efficiently.
Actual speed on a 1Gbit port is something like 940Mbps in my experience (I believe the theoretical max there is about 970).
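Those numbers fall straight out of the framing overhead: each 1500-byte IP packet carries 38 extra bytes of Ethernet overhead on the wire (preamble, header, FCS, inter-frame gap), plus 40-52 bytes of IP/TCP headers inside it.

```python
# Why a "1 Gbps" port tops out around 940-950 Mbps of TCP payload.
LINK = 1_000_000_000                 # line rate in bits/s
MTU = 1500                           # IP packet size in bytes
ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, header, FCS, inter-frame gap
WIRE = MTU + ETH_OVERHEAD            # bytes occupying the wire per packet

ip_goodput = LINK * MTU / WIRE                   # IP-level throughput
tcp_goodput = LINK * (MTU - 20 - 20) / WIRE      # minus IP + TCP headers
tcp_ts_goodput = LINK * (MTU - 20 - 32) / WIRE   # with TCP timestamps option

print(f"IP level:          {ip_goodput / 1e6:.0f} Mbps")      # ~975
print(f"TCP payload:       {tcp_goodput / 1e6:.0f} Mbps")     # ~949
print(f"TCP w/ timestamps: {tcp_ts_goodput / 1e6:.0f} Mbps")  # ~942
```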
My typical speed test results are around 104Mb/s. Before being upgraded, on the 50Mb/s package I was getting 52Mb/s.
My suspicion is that the fibre network operator (OpenServe in South Africa) applies rate limits which are technically a little above what their customers are paying for, perhaps to avoid complaints from people who don't understand overheads.
The poster above is claiming to see a physically impossible speed on 1gig networking.
And on that ISP side of things, it's a software-defined limit; it's just a field in a database or a config file that can be tuned to be whatever they want it to be.
But the fellow up there says that they got 1.2Gbps through a Mikrotik Hex S: https://mikrotik.com/product/hex_s
And that's just not possible*. The Mikrotik Hex S's own hardware Ethernet interfaces are 1000BASE-T, and it's simply not possible to squeeze more than 1.0Gbps through a 1000BASE-T interface. (It does also have a single SFP port, which is branded as "1.25Gbps," but in reality it, too, is limited to no more than 1.0Gbps of actual data transfer.)
*: Except... the 2025 version of the Hex S, E60iUGS, does have a 2.5Gbps SFP port that could be used as an ISP connection, and a much-improved internal fabric compared to the previous version. But the rest of its ports are just 1Gbps, which suggests a hard 1Gbps limit for any single connected LAN device.
Except... Mikrotik's RouterOS allows hardware to be configured in many, many ways -- including using LACP to aggregate ports together. With the 2025 Hex S, an amalgamation could be created that would allow a single client computer to get >1Gbps from an ISP. It might even be possible to be similarly-clever with the previous version of the Hex S. But neither version will be able to do end-to-end >1Gbps without very deliberate and rather unusual effort.