On X, we had Xorg and that was it. But at least Xorg did a lot of the work for you.
On Wayland, you in theory have to do a lot more of the work yourself when you build a compositor. But what we are seeing is libraries emerging that do this for you (wlroots, Smithay, Louvre, aquamarine, SWC, etc.). So we have this one-man project expecting to deliver a dev release in just a few months (mid-2026 is 4 months from now).
But it is not just that we have addressed the Wayland objection. This project was able to evaluate alternatives and decide that Smithay is the best fit, both for features and for language choice. As time goes on, we will see more implementations that compete with each other on quality and features. This will drive the entire ecosystem forward. That is how Open Source is supposed to work.
I wonder how strictly they interpret behavior here given the architectural divergence?
As an example, focus-stealing prevention. In xfwm4 (and X11 generally), this requires complex heuristics and timestamp checks because X11 clients are powerful and can aggressively grab focus. In Wayland, the compositor is the sole arbiter of focus, so clients can't steal it; they can only request it via xdg-activation. Porting the legacy X11 logic means actually designing a new policy that feels like the old heuristic but operates on Wayland's strict authority model.
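To make that concrete, here's a toy sketch of what such a compositor-side policy could look like. This is my own illustration, not xfwm4's actual heuristic and not Smithay's API; every type and name here is invented:

    use std::time::{Duration, Instant};

    // On Wayland the compositor alone decides focus; a client can only
    // hand over an xdg-activation token. These types are invented.
    struct ActivationRequest {
        token_issued: Instant,   // when the compositor minted the token
        from_user_input: bool,   // token came from a real click/keypress
        same_app_as_focus: bool, // requester owns the focused window
    }

    // A policy that mimics the old focus-stealing-prevention feel:
    // grant focus only for fresh, user-initiated requests.
    fn should_grant_focus(req: &ActivationRequest, user_is_typing: bool) -> bool {
        if req.same_app_as_focus {
            return true; // apps may move focus among their own windows
        }
        let fresh = req.token_issued.elapsed() < Duration::from_secs(2);
        // Never yank focus away while the user is actively typing elsewhere.
        req.from_user_input && fresh && !user_is_typing
    }

    fn main() {
        let req = ActivationRequest {
            token_issued: Instant::now(),
            from_user_input: true,
            same_app_as_focus: false,
        };
        println!("grant focus: {}", should_grant_focus(&req, false));
    }

The point being: every input to the decision is a fact the compositor already owns, so no timestamp races with clients are possible.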
This leads to my main curiosity regarding the raw responsiveness of XFCE. On potato hardware, xfwm4 often feels snappy because it can run as a standalone stacking window manager with the compositor disabled. Wayland, by definition, forces compositing. While I am not concerned about Rust-vs-C latency (since Smithay compiles to machine code without a GC), I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited X11 on low-end devices, or is that a class of performance we just have to sacrifice for the frame-perfect rendering of Wayland?
Naturally, a language island like this creates some attrition around build tooling, integration with the existing ecosystem, and who is able to contribute to what.
So let's see how it evolves. Even with my C bashing, I was a much happier user with XFCE than with GNOME and its GJS all over the place.
It is not the performance bottleneck people seem to believe.
Implementation matters, including proper use of JIT/AOT toolchains.
https://gitlab.xfce.org/xfce/xfwm4/-/blob/master/settings-di...
I think this is ultimately correct. The compositor will have to render a frame at some point after the VBlank signal, and it will need to compose it from the buffers on-screen as of that point, which will contain whatever was last rendered to them.
This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces into hardware accelerated DRM planes in more circumstances. In this situation, the unredirected planes will not suffer compositing latency, as their buffers will be scanned out by the GPU at scanout time with the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)
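Going back to the unredirection point: as a rough illustration of the kind of eligibility check involved in putting a surface on a hardware plane, here's a toy sketch. The criteria and types are invented for illustration; real compositors consult the plane capabilities the kernel reports:

    // Toy direct-scanout check; all types invented for illustration.
    struct Surface {
        fourcc: u32,       // pixel format of the client buffer
        width: u32,
        height: u32,
        opaque: bool,      // fully opaque buffers are easier to offload
        transformed: bool, // needs rotation/scaling the plane must do
    }

    struct DrmPlane {
        formats: Vec<u32>, // formats the plane can scan out
        max_width: u32,
        max_height: u32,
        can_scale: bool,
    }

    // True if the surface could bypass compositing and be scanned out
    // directly on this plane, avoiding the compositing latency.
    fn can_unredirect(s: &Surface, p: &DrmPlane) -> bool {
        p.formats.contains(&s.fourcc)
            && s.width <= p.max_width
            && s.height <= p.max_height
            && (!s.transformed || p.can_scale)
            && s.opaque // ignore punch-through underlays for simplicity
    }

    fn main() {
        let plane = DrmPlane {
            formats: vec![0x3432_5258], // DRM fourcc 'XR24' (XRGB8888)
            max_width: 4096,
            max_height: 4096,
            can_scale: false,
        };
        let video = Surface {
            fourcc: 0x3432_5258, width: 1920, height: 1080,
            opaque: true, transformed: false,
        };
        println!("unredirect: {}", can_unredirect(&video, &plane));
    }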
I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad with 60 Hz: it's, what, 8.3ms? But half a frame at 144 Hz is much less at somewhere around 3.5ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least a Pentium now?) but I do think there is a point where asking for things to work well is just too much. At that point, it's not a matter of asking developers not to waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.
Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It's not ideal, but what are you going to do? Compositors offer many advantages, and it seems reasonable to design for a future where they are always on.
I think I know what "frame perfect" means, and I'm pretty sure that you've been able to get that for ages on X11... at least with AMD/ATi hardware. Enable (or have your distro enable) the TearFree option, and there you go.
I read somewhere that TearFree is triple buffering, so, if true, it's my (perhaps mistaken) understanding that this adds a frame of latency.
True triple buffering doesn't add one frame of latency, but since it enforces only whole frames be sent to the display instead of tearing, it can cause partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)
But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
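To make the difference concrete, here's a toy simulation I put together (it ignores render-time variance and real driver back-pressure, so the numbers are only illustrative): with a renderer faster than a 60 Hz display, mailbox-style triple buffering keeps latency well under one frame, while a three-deep FIFO queue settles at several frames.

    use std::collections::VecDeque;

    const VSYNC: f64 = 16.7; // 60 Hz display, ms per refresh
    const RENDER: f64 = 8.0; // the app finishes a frame in 8 ms

    // Average ms between a frame finishing and reaching the screen.
    fn simulate(mailbox: bool, depth: usize, frames: usize) -> f64 {
        let mut queue: VecDeque<f64> = VecDeque::new(); // completion times
        let mut held: Option<f64> = None; // frame waiting for a free slot
        let mut start = 0.0;              // when the current frame began
        let mut t = VSYNC;                // next vsync
        let (mut total, mut shown) = (0.0, 0usize);
        while shown < frames {
            // 1) Render frames up to this vsync (unless blocked).
            while held.is_none() {
                let done = start + RENDER;
                if done > t { break; }
                if mailbox {
                    queue.clear(); // newest frame replaces the pending one
                    queue.push_back(done);
                    start = done;
                } else if queue.len() < depth {
                    queue.push_back(done); // FIFO: wait behind older frames
                    start = done;
                } else {
                    held = Some(done); // queue full: renderer stalls
                }
            }
            // 2) Display latches the oldest queued frame at the vsync.
            if let Some(finished) = queue.pop_front() {
                total += t - finished;
                shown += 1;
            }
            // 3) A slot freed, so a stalled renderer submits and resumes.
            if let Some(f) = held.take() {
                queue.push_back(f);
                start = t;
            }
            t += VSYNC;
        }
        total / shown as f64
    }

    fn main() {
        println!("mailbox:     {:.1} ms avg", simulate(true, 1, 1000));
        println!("3-deep FIFO: {:.1} ms avg", simulate(false, 3, 1000));
    }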
Well, the answer is just no; Wayland has been consistently slower than X11, and nothing running on top of it can really get around that.
Wayland is a specification; it can't inherently be "faster" than other options. That's like saying JSON is 5% slower than Word.
And as for the implementations being slower than X, that also doesn't reflect reality.
Personally, I'm a big proponent of Wayland and not a big Rust detractor, so I don't see any problem with this. I do, however, wonder how long-time XFCE fans and the folks who donated the money funding this will feel about it. To me the reasoning is solid: Wayland appears to be the future, and Rust is a good way to help avoid many compositor crashes, which are a more severe issue on Wayland (though a crash doesn't necessarily need to be fatal, FWIW). Still, I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Of course, if they made the right choice, it should be apparent in relatively short order, so I wish them luck.
Very long time (since 2007) XFCE user here. I don't think this is accurate. We want things to "just work" and not change for no good reason. Literally no user cares what language a project is implemented in, unless they are bored and enjoy arguing about random junk on some web forum. Wayland has the momentum behind it, and while there will be some justified grumbling because change is always annoying, the transition will happen and will be fairly painless as native support for it continues to grow. The X11 diehards will go the way of the SysV-init diehards: some weird minority that likes to scream about the good old days on web forums but that no one really cares about.
There are good reasons to switch to Wayland, and I trust the XFCE team to handle the transition well. Great news from the XFCE team here, I'm excited for them to pull this off.
> The X11 diehards will go the way of the SysV-init diehard
I hope you are not conflating anti-systemd people with SysV-init diehards? As far as I can see, very few people want to keep SysV init, but there are lots who think systemd is the wrong replacement, primarily because it's a lot more than an init system.
In many ways the objections are opposites: people hate systemd for doing more than init, and people hate Wayland for doing less than X.
Edit: corrected "Wayland" to "XFCE" in first sentence!
Systemd is creating the same kind of monolithic monoculture that Xorg represented. Wayland is far more modular.
Regardless of your engineering preferences, rejecting change is the main reason to object to both.
It would have been much easier and more cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, the COSMIC devs are actively working on Smithay, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have GNOME, Plasma, wlroots, Weston, and Smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex low-level graphics is and how many edge cases it entails.
I upvoted your general response but this line was uncalled for. No need to muddy the waters about X11 -> Wayland with the relentlessly debated, interminable, infernal init system comparison.
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and practices that are common.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; SQLite is famously C, after all, so it's clearly possible. It just requires a fairly large amount of discipline and technical effort.
Usually, though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively fewer transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would have been a lot fewer avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely a result of many things, like the behaviors of clap and anyhow-style Rust error handling leaking through to the interface. Same with Go. Even with languages that don't have as much of a monoculture, like, say, Python or C, I think you can still find clusters of stereotypes that predict program behavior, error handling, and interfaces surprisingly well, and that likely line up with specific libraries and frameworks. It's totally possible, for example, to make a web page with zero directly visible artifacts of what frameworks or libraries were used to make it. Yet despite that, when people just naturally use those frameworks, there are little "tells" you can pick up on a lot of the time. You ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all).
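To show what I mean by a "tell": a minimal sketch using clap's derive API and anyhow, two extremely common crates; nothing here is specific to any real project. Run it against a missing file and you get the characteristic "Error: failed to read ..." cause chain, plus the characteristic --help layout.

    use anyhow::{Context, Result};
    use clap::Parser; // requires clap's "derive" feature

    /// The derive API gives nearly every Rust CLI the same --help
    /// layout and error tone, which is the stereotype leaking through.
    #[derive(Parser)]
    struct Cli {
        /// Path to the config file
        #[arg(long, default_value = "config.toml")]
        config: std::path::PathBuf,
    }

    fn main() -> Result<()> {
        let cli = Cli::parse();
        let text = std::fs::read_to_string(&cli.config)
            .with_context(|| format!("failed to read {}", cli.config.display()))?;
        println!("{} bytes of config", text.len());
        Ok(())
    }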
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set an environment variable (RUST_BACKTRACE) to get Rust programs to print full backtraces, for example. The RUST_LOG environment variable isn't part of Rust itself, but it is used by multiple libraries in the ecosystem (see the sketch after this list).
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust can definitely be a bit harder. Changes that might be possible to shoehorn in with other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
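On the operational-familiarity point above, here's the kind of thing I mean, as a minimal sketch (RUST_BACKTRACE is read by the standard library's panic handler; RUST_LOG is a convention of the env_logger crate and friends, not of the language):

    // Requires the common `log` and `env_logger` crates.
    fn main() {
        env_logger::init(); // run with e.g. RUST_LOG=debug
        log::info!("shown with RUST_LOG=info or lower");
        log::debug!("shown only with RUST_LOG=debug");
        // RUST_BACKTRACE=full affects the output if this panics:
        // panic!("boom");
    }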
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
Kids these days... trolling used to require what's now called effortposts.
At best, it seems like a huge diversion of time and resources, given that we already had a working GUI. (Maybe that was the intention.) The arguments for it have boiled down to "yuck, code older than me" from supposed professionals employed by commercial Linux vendors to support the system, plus Android-like separation, a feature no one really wants.
The mantra of "it's a protocol" isn't very comforting when it lacks so many features that workarounds become necessary, leading to fragmentation and general incompatibility. There are plenty of complicated, bad protocols. The ones that survive are inherently "simple" (e.g., SMTP) or "trivial" (e.g., TFTP). Maybe there will be a successor to Wayland that will be the SMTP to its X.400, but to me, Wayland seems like a past compromise (almost 16 years of development) rather than a future.
Furthermore, all of these options can be enabled individually on multiple screens on the same system and still offer a good mixed-use environment. As someone who has been using HiDPI displays on Linux for the past 7 years, Wayland was such a game changer for how my system works.
Also, by "commercial linux vendors", you do realize Wayland is directly supported (afaik, correct me if wrong) by the largest commercial linux contributors, Red Hat, Canoncial. They're not simply 'vendors'.
I don't know if others have experienced this, but the biggest bug I see in Wayland right now is that sometimes, on an external monitor after waking the computer, a full-screen Electron window will crash the display (i.e., the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug, because it only affects my external monitor and only affects Electron apps. (I notice it with VSCode the most, but that's just because I have it running virtually 24/7.)
If anyone has encountered this issue and figured out a solution, I am all ears.
I guess we’ll see if that development is ever applied to the main branch, or if it supplants the main X branch. At the moment, though… if that’s the future of X, then it is fair to be a little bit unsure if it is going to stick, right?
The OpenBSD people are still working on Xenocara, and it introduces actual security via pledge system calls.
Funnily enough, my first foray into these sorts of operating systems was BSD, but it was right when I was getting started. So I don't really know which of my troubles were caused by BSD being tricky (few, probably), and which were caused by my incompetence at the time (most, probably). One of these days I'll try it again…
Development of X11 has largely ended and the major desktop environments and several mainstream Linux distributions are likewise ending support for it. There is one effort I know of to revive and modernize X11 but it’s both controversial and also highly niche.
You don’t have to like the future for it to be the future.
And sadly, Wayland decided to just not learn any lessons from X11, and it shows.
Odd. Xorg still works fine [0], and we'll see how XLibre pans out.
[0] I'm using it right now, and it's still getting updates.
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird to not be. But what's really crazy, is that it does work. I'm using it right now. It doesn't only work, but it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated than it does now. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (My go-to example is the color management protocol, which is probably one of the most solid color management APIs so far) but it really does take forever (My go-to example is the color management protocol, which took about 5 years from MR opening to merging.)
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken the stance of not just implementing raw tools that could be used to build various UI features, but instead implementing protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that this happened, and the compositor takes over from there and initiates the drag.
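Concretely, the compositor side of that handshake can be tiny. A toy sketch (all names invented; real compositors validate the request's serial against the triggering input event, roughly like this):

    // Toy compositor-side handling of a "move me" request.
    struct Compositor {
        last_pointer_serial: u32,
        pointer_pos: (f64, f64),
        drag: Option<Drag>,
    }

    struct Drag {
        window_id: u32,
        grab_offset: (f64, f64), // pointer position relative to the window
    }

    impl Compositor {
        // The client sends only the serial of the button press that
        // started the drag; a stale or made-up serial is ignored.
        fn handle_move_request(&mut self, window_id: u32, serial: u32, win_pos: (f64, f64)) {
            if serial != self.last_pointer_serial {
                return; // not tied to a real, recent input event: deny
            }
            self.drag = Some(Drag {
                window_id,
                grab_offset: (self.pointer_pos.0 - win_pos.0,
                              self.pointer_pos.1 - win_pos.1),
            });
        }

        // From here on the client never sees coordinates; the
        // compositor repositions the window as the pointer moves.
        fn pointer_motion(&mut self, x: f64, y: f64) -> Option<(u32, (f64, f64))> {
            self.pointer_pos = (x, y);
            self.drag.as_ref()
                .map(|d| (d.window_id, (x - d.grab_offset.0, y - d.grab_offset.1)))
        }
    }

    fn main() {
        let mut c = Compositor {
            last_pointer_serial: 42,
            pointer_pos: (100.0, 100.0),
            drag: None,
        };
        c.handle_move_request(1, 42, (90.0, 80.0));
        println!("{:?}", c.pointer_motion(110.0, 120.0)); // window 1 -> (100, 100)
    }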
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
But that's definitely where things are stuck. Some applications have UI features that they can't implement in Wayland. xdg-session-management, for saving and restoring window positions, is still not merged, so there is no standard way to do this in Wayland. ext-zones, for positioning a multi-window application's windows relative to each other, is still not merged, so there is no standard way to do that either. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), whereas there is xdg-foreign, which is supported by many compositors (GNOME, KDE, and Sway, but missing in Mir, Hyprland, and Weston. Fragmentation!) but doesn't support every possible thing you could do in X11 (like passing an XID to mpv to embed it in your application, for example).
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
I understand the frustration, but I see a lot of "it's completely useless" and "it's a regression", though to me it really sounds like Wayland is an improvement in terms of security. So there's that.
Citation needed. None of the other desktops have slowed with Wayland, and gaming is as fast, if not marginally faster, on KDE/GNOME with Wayland vs. LXDE on X.
Great to know there's work on the Wayland support front.
Also, writing it in Rust should help bring more contributors to the project.
If you use Xfce I urge you to donate to their Open Collective:
I will try to dive into how the Wayland API actually works, because I'd really like to know what not to do, since wrappers used "wrong" can crash.
I left Gnome 3 for other WMs (eventually settled on cinnamon), but every once in a while I decided to give Gnome 3 a try, just to be disappointed again. I felt like those people in abusive romantic relationships that keep coming back and divorcing over and over again. "Oh, Gnome has really changed now, he won't beat me again this time!".
In case you weren't there, the "even" kernels (e.g. 2.0, 2.2, 2.4, and 2.6) were the stable series while the "odd" kernels (e.g. 2.1, 2.3, 2.5) were the development series. The development model was absolutely mental, and development moved at a glacial pace compared to today's breakneck speed.
The pre-git days were less than ideal. The BitKeeper years were... interesting, politically and philosophically speaking.
Also, KDE4 was a dark, dark period.
Then we'll make Wayland 2.
I have an old ThinkPad. Firefox on X is slow and scrolls poorly. On Wayland, the scrolling is remarkably smooth for 10-year-old hardware, and the addition of touchpad gestures is very nice. Yes, there's more configuration overhead for each compositor, but I'm now accepting this trade.
Could you expand on why you describe Hyprland and XFCE4 as "a cursed combination"? Might provide some insight as to why the official XFCE project decided to create their own compositor.
If an application is written for Wayland, is there a way to send its windows to (e.g.) my Mac, like I can with X11 to XQuartz?
Currently I can:
$ ssh -X somehost xeyes
and get a window on macOS.

$ waypipe ssh somehost foot
You need waypipe installed on both machines. For the Mac, I guess you'll need something like cocoa-way (https://github.com/J-x-Z/cocoa-way). Some local Wayland compositor, anyway.
Now the last 3 times I tried Wayland everything ended up a blurry mess and some windows just ended up the wrong size, so.
I suppose I'll just keep holding out hope.
GNOME was cool during the sawfish days.
I hope XFCE preserves this; it is a killer feature in today's world.
I wonder how long it'll take them to write a compositor from scratch.
I've been using Pop!_OS for a while, but XFCE will always have a place in my heart.
If it had tiling support I'd probably use it still. Being so lightweight is a massive boon.
What would you have them replace it with?
If they ever move away from GTK (due to the GNOME shenanigans GNOME-izing GTK), I wish Enlightenment and Xfce were together a single thing. But that's if I could ask the Tux genie for three wishes.
Are you also willing to maintain it?
Do note that I've never tried to crowdfund a programmer, but that's something that I have to believe is possible to do.
[0] <https://github.com/X11Libre/xserver?tab=readme-ov-file#i-wan...>
People like to frame things as if the Waylands were some sort of default, and as if nothing is being lost and no one is being excluded.