1. By the author - https://news.ycombinator.com/item?id=44034961
2. Ubuntu Publication - https://news.ycombinator.com/item?id=44306892
And this post.
--
`edit` doesn't even support syntax highlighting (at least, not out of the box when I tried it).
The trick is doing it while keeping the binary size small, so Tree-sitter is not an option.
Prose thrives in the terminal. A Song of Ice and Fire was written in WordStar, as just one popular example.
He's averaging a hundred pages a year. Maybe not the fastest, but certainly not the slowest writer. With the size of his books... Cut the guy some slack.
I can't find the link but I think at some point she compiled her own nano with some "helpful" feature patched out again.
For starters, there's your assumption that there is "syntax" to be highlighted. Not every text file is something written in a computer programming language.
In fact I'd put money on it, but sadly do not have any evidence to back it up.
If you have evidence to the contrary I'd be intrigued!
But these are amateur geeks, or geeks in the making, who probably don't mind having syntax highlighting built in, even if for some purposes they want it turned off.
Nano also links against ncurses, which is about as big as the compressed tarball for micro. I'm looking at the dependency closures of each right now in nix-tree[1], and micro's closure's total size is 15.04 MiB while nano's is 12.78 MiB -- not really "orders of magnitude" (as a sibling commenter suggests) when you look at it like that.
Admittedly, nano's dependencies (`file` and `ncurses`, on my system) are likely to ship as part of the "base system" of any Linux distro anyway; the real size it adds to any distro is negligible. But there's no indication to me that micro is meaningfully "bloated", as the meme goes; it seems like what is required to run it is reasonable and comparable to other tools that serve the same purpose.
--
1: See: https://github.com/utdemir/nix-tree ; Try `nix run nixpkgs#nix-tree -- $(nix build --no-link --json nixpkgs#nano | jq -r .[0].outputs.out)`
I installed nano with CUA keybindings instead.
Not caring about a couple of megs here and there is what makes some modern systems so bloated.
I know that I can press like 3-4 arbitrary buttons to mark a block and move it to a different place -- how about I just mark it with my cursor and CTRL-X, CTRL-V, like every freaking other program out there.
I appreciate that I get vi on freshly installed or secured servers, but for things I use daily, I just want it to be KISS. Already counting on people answering "but vim is easy and simple". Opinions differ, I guess.
Setting up a decent environment is also a huge pain to get started with, but nowadays you can just hop into a prewarmed pool of premade setups like Normalvim or LunarVim.
But usability is not just "is it easy to learn"; it's also "once I know it, how hard is it to use?"
Once the moves are ingrained in your (muscle-)memory it becomes so incredibly efficient. di{, dat, yaf etc. are just the low hanging fruit, once you start with regex, macros and plugins the fun really begins.
But before I learned to ride a bike, I used training wheels, and before I learned enough vim to enjoy using vim, I leaned on nano.
When someone is first learning to explore GNU/Linux, or even to dig into the Unix guts of macOS, they're learning a whole new world, not just a new text editor. For some people, strategic bridges to what they know (like CUA or Windows-like shortcuts) can make this process more fun and less fatiguing. Sometimes that difference is decisive in keeping someone motivated to learn and explore more.
Anyway, I think vim is worth learning (and maybe some of the quirks of old-school vi, if you expect to work on old or strange systems). It's not a matter of if I recommend that someone learn vim, but when. And until it's time for them to explore an editor deeply, micro seems like a great fit for most people.
I also want to say: as enthusiasts of Unix-like operating systems, or as professionals who appreciate some of their enduring strengths, should we really embrace a "because it's there" doctrine? Isn't that same kind of thinking responsible for huge, frustrating piles of mediocrity that we work with every day and resent?
As someone who loves an ecosystem built first by volunteers as "just a hobby, nothing big and serious", I will say it's sad, if not hypocritical, to dismiss software projects just because they aren't already dominant players. Most software I love was once marginal, something its users went to lengths to install on the systems they used because they enjoyed it more than the defaults. We should, to the extent practical, try to leave a little room for that in the way we approach computing, even as we get older and grumpier.
My "project file" was `e.bat` with `edit file1.cpp file2.cpp file3.cpp`, as it was one of the few editors I knew that had decent multi-file support with easy switching (alt-1,2,3, ...). I still remap editor keybindings to switch to files with alt/cmd-1,2,3,... and try to keep my "active set" among the first few files in the editor.
It wasn't a great code editor, as it didn't have syntax highlighting, and the indent behaviour wasn't super great (which is why early in my career my indent was two spaces, as that was easy enough to do by hand, and wasn't too much like a tab). But I felt very immediate with the code anyway.
I knew that many others used editors like `qedit`, but somehow they never clicked with me. The unixy editors didn't feel right in DOS either.
Quickly trying this, it doesn't seem to switch buffers with the same keybindings, even if it does seem to support multiple buffers.
And it wasn't just similar. It was literally the same. EDIT.COM simply started QBASIC up with a special flag. One could just run QBASIC with the flag. As I said at https://news.ycombinator.com/item?id=44037509 , I actually did, just for kicks.
Rust + EDIT.COM is kind of like remaking/remastering an old video game.
This is not a rewrite. Maybe it’s slightly inspired by the old thing, especially with having GUI-style clickable menus (something not seen often in terminal editors), but it’s much more modern.
CUA-style menubars aren't that uncommon in textmode editors. Midnight Commander's editor has traditional menubars with much more extensive functionality, as does jedsoft.org's Jed editor. Both of these also support mouse input on the TTY console via GPM, not just within a graphical terminal.
I guess they thought that inheriting 25 years of C code was more trouble than designing a new editor from scratch. But you'd have to ask the devs why they decided to go down that route.
Yes, but you have to put `set mouse` into your nanorc.
Look at the number of contributors here. This project was probably some strategic investment. It did not come into existence overnight.
First of all, an empty list of dependencies! I am sold! It works great. I can't believe they did a whole TUI just for this, with dialogs and a file browser. I want to use it for a project of mine; I wonder how easy it is. If someone involved in the project is here: why not use Ratatui?
Code quality is top notch, can only say one thing:
Bravo!
I am surprised Microsoft didn't use the opportunity to create a Microsoft-specific Linux distro that replaces bash with PowerShell, offers Edit alongside vim, nano, and other choices, and bundles .NET and Visual Studio Code for developer installs.
Microsoft could have used this as their default WSL install.
It may not have won the war against typical distros like Ubuntu or Debian, but it could have gained a percentage and become a common choice for Windows users - and there are a lot of Windows users!
Microsoft cannot dominate the Linux kernel but it can gain control in userland. Imagine if they gained traction with their applications being installed by default in popular distributions.
This Microsoft Edit is available for Linux, like PowerShell and others are. If they had played their cards right -- perhaps -- 10 years ago, their distribution could have been in the top 5 today, all because many Windows users would use it as their WSL distro.
Giant companies (like M$) can inject their fingerprints into my personal space. Now, we just need Microsoft Edit to have Copilot on by default...
Only problem is that the NT kernel is in many ways much better than the Linux kernel design-wise (for example, the NT kernel can handle a total GPU driver crash and restore itself, which I think Linux would really struggle with - same with a lot of other drivers).
But Windows is increasingly a liability, not an asset, for Microsoft, especially in the server space. Their main revenue streams are Azure & Office 365, which are still growing at double digits, while Windows license growth is flat.
At a minimum I'd expect a Linux based version of Windows Server and some sort of Workstation version of Windows, based on Linux.
You may not understand how important Microsoft considers backwards compatibility. Switching to a Linux kernel would eliminate all of that, and that is simply not an option for Microsoft.
The Linux kernel is missing a lot of esoteric things that the NT kernel has and that people use a lot, as well.
Windows as we use the word today (any variant) will not ever switch to a Linux kernel.
I do hope one day Microsoft puts a proper GUI on Linux though: no X, no Wayland, but something smarter and better than those. Probably also not likely to happen, but I'd love to see it if they could do it well.
Now that I say that, though, that does sound like a Microsoft kind of move. They do love other platforms "to death."
People use it as if to say that there should never be anything new.
Most developers don't want to use Linux at all. Many developers don't even really know how to use a terminal and rely on GUI tools.
First of all, I disagree with this comment.
However, let's assume you are right: that the average "Windows developer" has little to zero skill in GNU/Linux.
If that is the case, it proves my point EVEN MORE that Microsoft missed out on creating a Microsoft Linux distro... designed to have PowerShell, Visual Studio Code, Edit, and potentially Edge, SQL Server, etc.
It would still be Linux, but keeping to what they know in Windows -- and it would have given Microsoft more power in the Linux world.
You can disagree all you want. It is simply the truth. I've contracted in the UK and Europe. Most devs don't even know you can tab-complete most commands in modern shells (IIRC cmd.exe supports this). This is true of both Microsoft shops and shops that use open-source stacks, e.g. LAMP and similar.
I was in a large company in the NW and I knew two developers in a team of 30 that knew basic bash and vim.
There is a reason why "how do I exit vim" is a meme. Most people have no idea how to do it.
> If that is the case, it proves my point EVEN MORE that Microsoft missed out on creating a Microsoft Linux distro... designed to have PowerShell, Visual Studio Code, Edit, and potentially Edge, SQL Server, etc.
Respectfully you seem to have never worked with the people I describe. You listed PowerShell as if they would use it. A former colleague of mine was quizzed why he would use PowerShell to write a script that would run on a Windows Server. They had expected him to write a C# program.
I have worked for various companies as well: UK, Netherlands, etc. Yes, from my experience, developers working in a Windows environment (Windows development) will have less knowledge of bash or Linux in general if they simply are not using it. These are developers using Windows, SQL Server, .NET, and other Microsoft-focused products.
I would agree that Windows developers have fewer skills with a shell, even CMD, much less PowerShell. However, if we are going to FOCUS on this userbase, they are more likely to accept a WSL Linux distro created by Microsoft, bundled with PowerShell, .NET, etc., than to use Ubuntu with bash, vim/nano, or variants.
Also, I have worked for companies that focused on LAMP development, and their Linux skills were decent to pro. The only time someone would struggle is likely because they are junior level... and coming from a Windows background.
> Respectfully you seem to have never worked with the people I describe. You listed PowerShell as if they would use it. A former colleague of mine was quizzed why he would use PowerShell to write a script that would run on a Windows Server. They had expected him to write a C# program.
Powershell... C#... both of which are Microsoft. Powershell is .NET under the hood. Doesn't change my comment.
(looks at the install numbers for Linux vs Windows in the server space) I'm not so sure.
Yes. That is the majority of developers. I had to explain to a dev today (nice enough guy) that he has to actually run the tests.
> Windows is legacy, the future is in open source.
You can claim the future is open source, but the industry has moved towards SaaS, PaaS, and IaaS, which is even more lock-in than using a proprietary OS such as Windows.
So while you might have an opensource OS, many of the programs you use will be proprietary in the worst way possible.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
Fortunately, Linux users can avail themselves of a graphical interface as well.
I really shouldn't have to explain what follows. But I will.
Installing any third-party dev tooling is done on the command line. Look up the instructions for installing Node LTS on Debian, or .NET, or Golang. You need to use the command line. Even easier-to-use distros have the same procedure. Depending on the tooling, you may need to set additional environment variables, which is normally done in your .bashrc or similar.
What normally happens is people blindly copy and paste things into the terminal and don't read the documentation. This has been a problem on Linux since before Ubuntu was released. This isn't just limited to newbies either.
The state of GUIs BTW isn't great. Many of them look nice, and work reasonably well most of the time, *until they don't* e.g. If I double click a deb to install it, sometimes it will install. Other times it won't. So I don't even bother anymore and just use dpkg/apt. BTW it isn't any better with other distros. So I have to drop to the command line to fix the issue anyway.
So at some point you will need to learn bash, learn to read the man pages, and manually edit configuration files. It is unavoidable on Linux.
The last one didn’t do so hot, they named it “Xenix”
On the other hand, according to AT&T, Xenix accounted for about half of the worldwide Unix licenses in the late 1980s.
You're confusing Microsoft's first-party Linux distro Azure Linux (nee CBL-Mariner) that is intended as a regular MS-supported OS for containers, VMs, servers, etc, with various Windows-like skins for Linux DEs that people have made for years.
Sorry, I don't understand the point you are making.
I did not suggest they had a "secret distro"; I am suggesting they could have claimed a share of dominance in the Linux distro space as the default WSL distro.
Yes, but how do they make money by doing this?
Unlike the socialist hiveminds that end up being behind the distros. Microsoft has salaries and bills to pay.
As far as I've always seen, everyone loves to leech on Microsoft's free stuff but nobody wants to pay for a product.
Besides, new PCs/laptops come bundled with Windows; Microsoft has agreements with various retailers to preinstall Windows (Home edition). So in some ways, Windows is free for the user unless they pay for the Professional edition, or whatever is offered today.
Of course, the average user will create a microsoft account to complete the install. :-)
Besides the Windows OS -- it is really the services they provide: Azure, Office 365, SQL Server, Power BI, etc. I would say THIS is where a lot of the money comes from... businesses willing to pay for them!
I work for companies that are willing to PAY for these things - all for "Support".
If something goes wrong.. raise it with Microsoft. Even if I know what the problem is, it is all about the ticketing system. Throw it to Microsoft and carry on.
Despite the above, Microsoft also has "free" software. They have started to open-source much of their software, allowing Linux support as well as Windows: Visual Studio Code, SQL Server, PowerShell, etc.
It comes back to my point. When they presented WSL, they could have provided an "MS Linux" distro, promoted as "ease for Windows users"; if it became a popular distro, it would have given Microsoft more control in userland... and drawn many Windows users away from Ubuntu, etc.
Like Windows, it is a method of keeping your userbase relying on what they know overall.
What Turbo Vision brought to the game was movable, (non-)modal windows. Basically a lot of rewriting that array in a loop. Pretty snappy. I made a shitload of money with that library.
Admittedly, a few things have changed in the last couple of years. MATLAB is being replaced by Python. Teaching the 8085 & 8051 is being replaced by RasPi/Arduino. The 8086 is taught alongside ARM & RISC, and not touted as SotA.
I last saw Turbo being used in 2016-17 in a university setting, inside DOSBox (because Windows 7+ has dropped support for such old programs). Insane, but true.
I once asked an Indian colleague why Indians use US/UK-nonstandard English like "kindly", "do the needful", and "revert".
He thought about it a minute, then said "Oh, the texts everyone uses to learn English say that proper letters must always begin with 'Kindly,'".
Sokath, his eyes uncovered.
It's never the compiler until it's the compiler. Just didn't expect it during some simple fun coding at home. :)
It's not. They needed a small TUI editor that was bundled with Windows and worked over ssh.
Arrays in TP were laid out in row-major order, and each character was represented by two bytes, one denoting the character itself and the other the attributes (foreground/background color and blinking). So, even better: `array[1..25, 1..80] of packed record ch: char; attr: byte end absolute $B800:0000`.
Replace $B800 with $B000 for monochrome text display (mode 7), e.g., on the Hercules.
That means there's always an opportunity for the resourceful.
So good.
I remember you could use it in a batch file to script some kinds of editing by piping the keypresses in from stdin. Sort of a replacement for a subset of sed or awk.
I haven't tried but this should be possible with vi too. Whether that is deeply cursed is another question.
Refreshing to see employees can have fun in a multi billion dollar company.
It's impressive to see how fast this editor is. https://github.com/microsoft/edit/pull/408
> By writing SIMD routines specific to newline seeking, we can bump that up [to 125GB/s]
Who's editing files big enough to benefit from 120 GB/s throughput in any meaningful way on the regular, using an interactive editor rather than just pushing it through a script/tool or throwing it into ETL, depending on the size and nature of the data?
Typically we just hand edit them. Actually been pleasantly surprised at how well VS Code handles it, very snappy.
As developers, we routinely need to work with large data sets, be it gigabytes of logs, CSV data, SQL dumps, or what have you.
Not being able to open and edit those files means you can't do your job.
Sure, maybe by switching to linux you can squeeze out an extra CPU core's worth of performance after you fire your entire staff and replace them with Linux experienced developers, and then rewrite everything to work on Linux.
Or, live with it and make money money money money.
Subject, of course, to Microsoft allowing you to continue to use their software.
If you build the world's widest bike, that's cool, and I'm happy you had fun doing it, but it's probably not the most useful optimization goal for a bike.
Fuzzy search, regular expression find & replace.
I wonder how much work is going to continue going into the new command? Will it get syntax highlighting (someone has already forked it and added Python syntax highlighting: https://github.com/gurneesh9/scriptly) and language server support? :)
Add on a well-built plugin API, and this will be nominally competitive with the likes of vim and emacs.
> The goal is to provide an accessible editor that even users largely unfamiliar with terminals can easily use.
/rant Today I spent 3 (three) hours trying to set up a new MSI AIO with Windows Pro. Because even though it would be joined to the local AD DS and managed from there, I need to join some Internet-connected network, set up 3 stupid recovery questions which would make NIST blush, and wait another 30 minutes for a forced update download which I cannot skip. Oh, something went wrong - let's repeat the process 3 times.
I've met biologists who enjoy the challenge of vim, but they are rare. nano does the job, but it's fugly. micro is a bit better, and my current recommendation. They are not perfect experiences out of the box. If Microsoft can make that out of the box experience better, something they are very good at, then more power to them. If you don't like Microsoft, make something similar.
mcedit ?
Wrongly phrased scenario. If you are running this cluster for the biologists, you should build a front end for them to "edit SLURM scripts", or you may find yourself looking for a new job.
> A Bioinformatics Engineer develops software, algorithms, and databases to analyze biological data.
You're an engineer, so why don't you engineer a solution?
The previous HN posts which linked to the blog post explaining the tool's background and reason for existing on Windows cover it all a lot better than a random title pointing to the repo.
But.. why?
As with .net, it is not intended to let you easily get away from Microsoft.
https://learn.microsoft.com/en-us/powershell/scripting/whats...
Is there supposed to be a single elected shell for Linux? PowerShell on Linux is just one of plenty of others.
I just wonder what the reason was to port it, and then I would like to have a word with a real living person who is actually using that shell.
Its object-oriented approach is nice to work with and provides some nice tools that contrast well with the Unix "everything is text" tooling approach. Anything with JSON output, for instance, is really lovely to work with via `ConvertFrom-Json` as PowerShell objects. (Similar to what you can do with `jq`, but "shell native".) Similarly, with `ConvertTo-Json`, for anything that takes JSON input you can build complex PowerShell object structures and then easily pass them as JSON. (I also sometimes use `ConvertTo-Json` for REPL debugging.)
It's also nice that shell script parameter/argument parsing is standardized in PowerShell. I think it makes it easier to start new scripts from scratch. There's a lot of bashisms you can copy and paste to start a bash script, but PowerShell gives you a lot of power out of the box including auto-shorthands and basic usage documentation "for free" with its built-in parameter binding support.
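To make that concrete, here's a minimal sketch; the cmdlets and the `param()` binding are real PowerShell, but the input file's shape (an array of objects with `name` and `size` fields) is an assumption invented for this example:

```powershell
# Save as Top-Items.ps1: PowerShell binds -Path/-Top for you, prompts for
# a missing mandatory parameter, and generates basic usage info for free.
param(
    [Parameter(Mandatory = $true)]
    [string]$Path,

    [int]$Top = 5
)

# JSON in, objects out (assumed shape: array of { name, size }).
$items = Get-Content -Path $Path -Raw | ConvertFrom-Json

# Work with the data as objects -- no text parsing required.
$items | Sort-Object -Property size -Descending |
    Select-Object -First $Top -Property name, size

# Objects in, JSON out: build a structure and serialize it for another tool.
@{ generatedAt = (Get-Date).ToString('o'); count = $items.Count } | ConvertTo-Json
```

Invoked as `.\Top-Items.ps1 -Path data.json -Top 3`; the parameter names also tab-complete, and unambiguous prefixes like `-Pa` work as auto-shorthands.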
EDIT.COM, on the other hand... nice and straightforward in my book
I can definitely see msedit having a useful place.
I might use nano via WSL (or at that point just nvim), but that also has its quirks.
It occupies the same space as micro did for me, but it's / it will be preinstalled so it's better (Also a reason I even cared for vi at first)
This particular application is incredibly basic -- much more limited than even EDIT for DOS.
Insane that we don't have a TUI in remote sessions in 2025.
Windows ships an official OpenSSH server these days, but so far there haven't been any good official text editors that work over OpenSSH, as far as I know.
I've had to resort to "copy con output.txt" the few times I needed to put things into a text file over windows-opensshd...
SqlServer, like it's the one that invented SQL or it's the only product that serves SQL.
Sure, "chcp" is a mouthful, but "del" or "erase" makes as much sense as learning that "rm" is short for remove. You pick up either convention quickly enough, except that I'm constantly using "where" when I mean "which". Maybe I should make an alias or something (see the sketch below).
Don't get me started on powershell's look-we-can-use-proper-words-lets-see-how-long-we-can-make-this.
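Back to the `where`/`which` mix-up: a minimal sketch of that alias, assuming a PowerShell profile. `Get-Command` is PowerShell's rough equivalent of `which`; note that `where` itself is already taken as a built-in alias for `Where-Object`, which only adds to the confusion.

```powershell
# In $PROFILE: give the Unix muscle memory somewhere to land.
Set-Alias -Name which -Value Get-Command

# Usage: `which notepad` now shows where the command resolves from,
# instead of accidentally invoking Where-Object via the `where` alias.
```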
...wait for it...
...Project.
Was charged with managing a department-wide installation about fifteen years back, now. You want to have fun looking for relevant docs, try a search on "Microsoft Project". Good times!
I think the one exception to Microsoft's generic naming convention is Excel. Visio probably qualifies, too, but they bought that from someone else.
Oh, and I guess PowerPoint, too.
Apple has Pages, Numbers, Keynote, etc. Google has Drive, Docs, Sheets, etc. Meta has Messenger. Far too many examples to list.
Conversely, it would be ridiculous to use non-obvious names.
They aren't trademarking it and probably can't.
But there's no reason anyone can't use generic naming for their products. Many software applications do, and quite frankly it's more descriptive for attracting new users than coming up with non-real names.
I would argue the only reason made-up names exist is to keep marketing departments employed, needlessly trying to explain to users what the products are.
Manage configuration, and external dependencies such as lsps with nix.
Then have separate nix shells for each project to load tooling and other dependencies in an isolated/repeatable session. Add in direnv to make for a more seamless development experience.
...
Anyways, here's how to tell if your LED sign is cheap!
It blipped on my radar recently when I did a sidequest into LuaJIT.
Also, just made a PR to add Nix flake support to Edit:
It was my favorite editor back in the old days.
It worked, did the basics really well and got the job done. Glad to see it’s back.
Fun project #2: Port to MS-DOS (with DPMI)
Fun project #3: Port to 16-bit MS-DOS (runs on original 8086)
Oddly, it looks more like Borland's editor.
The screen shot says differently.
This editor doesn't have delusions of grandeur; it focuses on usability more than features, and it is better for it.
Instead of donating to the Nano devs, or hiring some of them, or something.
Stupid corp at their finest.
Which is pretty neat.
msedit's key-bindings are based on IBM CUA. It's immediately familiar to a great many people.
But I’m glad someone wrote one of these in rust.
https://github.com/microsoft/edit/pull/534
Note that another editor called Micro is very similar:
The one thing that vexed me, for something based on edit, was CTRL+P being hijacked for something that isn't print; it's like we forgot about CUA over the last 15 years.
While Satya might have made the "Microsoft <3 FOSS" change, the Gates/Ballmer era was much better towards Windows developers.
Now we have a schizophrenia of Web and Desktop frameworks, which they themselves hardly use; what used to be a comfortable VS wizard or plugin is now, e.g., a CLI tool that dumps an Excel file, showing that the newer blood has hardly any Windows development culture, nor does their upper management.
As you may have guessed, this simply pushes out smaller devs. This used to NOT be like this. It should NOT be like this.
EV certificates have always felt like an utter scam and extortion to me. At least now there is an alternative.
10 years ago I wanted to build a Love2D game, and release it for the three major OS's. The .love files are effectively ZIP archives, kinda like cartridges, but you need the correct Love2D version (they broke API compat every year or so). Windows and Mac used to be: "cat love.exe game.zip > game.exe".
Linux gave me the most crap, because making a portable, semi-static build was a nightmare; you couldn't rely on distros because each one shipped a different version of love.
Now Linux is actually becoming more viable, not because it's making that much progress, but because the two mainstream platforms are taking steps back.
And game consoles naturally.
Apple is never first to do something.
I started coding for J2ME in a Vodafone contest in 2003, based on the Sharp GX20, which was using DOCOMO APIs.
Afterwards I joined Nokia, so I kind of had an idea of how we, and our competition, were doing in the market.
The US was the only market that stayed PDA-centric, with the exception of BlackBerry adoption, until the iPhone came to be.
Traditionally it was the only market where Nokia had issues.
You can use an ad-hoc signature to sign, but people who download the app will still have to jump through hoops to run it.
Bury it as deep as Microsoft wants, but...
1) Everyone can use it
2) It turns off all nanny-checks
3) It makes future checks opt-in instead of opt-out
That random exe link is signed by Microsoft.
This is not to say that WinForms is without its problems. I often wonder what it could be like if all the effort of making WPF and MAUI had gone into maintaining, modernizing, and improving it.
My only major problem with WinForms is that it's still using GDI under the hood, which, despite what many people believe, is actually still primarily software-rendered. If they could just swap out GDI for Direct2D under the hood (or at least allow a client hint at startup to say "prefer Direct2D"), it would really bring new life to WinForms, I think.
I would also like a C++ native GUI API that's more modern than MFC.
There have been similar F# libraries and third-party C# libraries for a while that seem nice to work with in similar ways.
MFC was already relatively bad versus OWL. Borland[0] kept improving it with VCL and nowadays FireMonkey.
Then there is Qt as well.
Microsoft instead came up with ATL, and when they finally had something that could rival C++ Builder, with C++/CX, a small group managed to replace it with C++/WinRT because they didn't like extensions, the irony.
With complete lack of respect for paying customers, as C++/WinRT never ever had the same Visual Studio tooling experience as C++/CX.
Nowadays it is in maintenance, stuck in C++17, working just good enough for WinUI 3.0 and WinAppSDK implementation work, and the riot group is having fun with Rust's Windows bindings.
So don't expect anything good coming from Microsoft in regards to modern C++ GUI frameworks.
[0] - Yes nowadays others are at the steering wheel.
Firstly, that nobody believes them when they swear that {new GUI framework} will be the future and used for everything. Really. Because this time is not like those other times.
Secondly, pre-release user feedback. Ironic, given other parts of Microsoft do feedback well.
Imho, the only way MS is going to truly displace WinForms at this point is to launch a 5-year project, developed in the open, and guided in part by their community instead of internally.
And toss a sweetener in, like free app signing or something.
Having said this, from 3rd parties, Avalonia is probably the best option.
While I think Uno is great as well, they lose a bit by betting on WinUI as foundation on Windows, and that has been only disappointment after disappointment since Project Reunion.
It quickly became apparent that WinUI3 was the only one even close to viable for our use case, and we tried getting a basic prototype running with our legacy backend code. We kept running into dealbreakers we hoped would be addressed in the alleged future releases, like the lack of tables, or the baffling lack of a GUI designer (like every other previous Win framework).
...We're currently writing our GUI in Qt.
A requirement for the tool is that it must remain as small as possible, so that it can be included in the smallest distributions of Windows, like Nano Server. It is the rescue text editor there.
I’m sure plugins are going to do all the things that everyone doesn’t want (or does want) but the default edit.exe will remain small, I’d bet money on it.
I was literally trying to configure Wireguard to get around the ISP issues.
What happened to pride or quality control or anything?
Sounds like some dangerous cowboy coding wrongthink you've got going on over there.
We are talking about Microsoft here.
Microsoft didn’t become dominant because of the quality of their software. They became dominant because they knew how to run an aggressively successful business better than their competitors.
I had to open Notepad and see it for myself. Wow! I see the Icon.
I remember Copilot just suddenly appearing in my taskbar and finding it annoying. Despite removing it, I still see it lurking around... and now I see it in a SIMPLE TEXT EDITING PROGRAM named Notepad.
Wow.
Look at Outlook. Literally less than 25% of the screen appears to be dedicated to email content. I say literally because I physically measured it and from what I remember it was 18% to 20%. Microsoft keeps adding these gigantic toolbars that each have duplicate buttons that often can’t really be adjusted, removed, or hidden. Or it may be an all-or-nothing scenario where something can be removed but then you can’t e.g. send emails.
Rather than fixing the problem, the solution is to add a new toolbar. This frequently keeps happening. Just one more toolbar with a select subset of buttons in one place so people can find it. Well now… We have some extra whitespace… Let’s throw in the weather there and why not put the news in too. What could possibly go wrong?
And then loading the news, some totally unrelated and non-critical feature they shove in forcefully by default frequently has at least one critical severe bug where there’s an async fetch process that spikes the cpu to max and crashes the whole system. There’s no way to disable news without first loading outlook and going into advanced settings, which of course is past the critical point of the news being loaded.
Go look at like Outlook 2003. It is nearly perfect. It’s clean, simple, and there’s no distractions. This is so amazing, like many Microsoft products that seem to be built by engineers, but I don’t know how we get to modern outlook that feels like it has 10 to 50 separate project manager teams bloating it up often with duplicate functionality.
This would be bad enough, but then again instead of fixing it like I said before or fixing it by reducing or consolidating teams or product work, we get ANOTHER layer of Microsoft bloat by having multiple versions of the same product. So we have Outlook (legacy) named that way to make you feel bad for using an old version, or named to scare you into believing it won’t be supported. Then there’s Outlook (New). Then there’s Outlook (Classic) which isn’t legacy or new but is a weird mix of things. Then there’s a web version that they try to force everybody into because it’s literally perfect and there’s no reason not to use it… Somehow they didn’t catch that emails don’t load in folders unless you click into them, or sorting rules don’t work the same or don’t support all the same conditions. Rather than fixing it, you get attacked for using edge case frivolous advanced obscure functionality. Like who would want to have emails pre-sorted into any folder except inbox? Shame on you for using email wrong I guess.
I’ll skip over the part where there’s multiple versions of the multiple forks of outlook. But there’s also Government, Education, Student, Trial, Free, Standard, Pro, Business, Business pro, Business premium, etc.
The last infuriating point in my rant has to come down to their naming standards. For some reason they keep renaming something old to a completely new name and of all the names they could pick, it’s not only something that already exists but it’s another Microsoft product. This is a nightmare trying to explain to somebody who is only familiar or aware of either the old or the new name and this confusion is often mixed even on a technically capable and competent team. For bonus points, the name has to be something generic. Even like “Windows” which is not a great example because the operating system is so popular but you can imagine similarly named things causing search confusion. Or even imagine trying to search for the GUI box thing that displays files in a folder within the operating system, also called a window, and try to imagine debugging an obscure technical problem about that while getting relevant information.
There’s so many Microsoft moments that things like adding AI to Notepad hardly faze me anymore. I don’t like that they do that, but I wouldn’t necessarily be so offended if their own description, the one they came up with in the first place, was what you mentioned. Constantly going against their own information, which they invented themselves and chose to state as a core statement, just irritates me.
The user interface is littered with useless crap, the File menu goes back to this weird completely new different UI layout etc etc.
And the best part is that if the VPN goes temporarily down it fails to send/receive new emails until it has been restarted.
Let me say that again.
It fails at its core functionality if there's a glitch in the network and cannot send or receive emails. That's just a next level of incompetence.
Yes, even when running on an unconnected session on a Windows server/VDI somewhere.
Microsoft has seemingly sucked at naming things since at least the mid-90s. It's effectively un-search-engine-able, but I recall that in the anti-trust action in the mid-90s a Microsoft person was trying to answer questions about "Internet Explorer" versus "Explorer" (as-in "Windows Explorer", as in the shell UI) and it was a confusing jumble. Their answers kept coming back to calling things "an explorer". It made very little sense. Years later, and after much exposure to Microsoft products, it occurred to me that "explorer" was an early 90s Microsoft-ism for "thing that lets you browse thru collections of stuff" (much like "wizards" being step-by-step guided processes to operate a program).
Also, playing-back my "greatest hits" comment re: Microsoft product naming: https://news.ycombinator.com/item?id=40419292
…but is it really less secure than brew or choco? The installers are coming from reasonably trusted sources and are scanned for malware by MS, a community contributor has to approve the manifest changes, and the manifests themselves can’t contain arbitrary code outside of the linked executable. Feels about as good as you can get without requiring the ISVs themselves to maintain repos.
There are ISVs that would like to lock down their software so they can maintain it but a trillion dollar company couldn't spare a dollar to figure out a "business process" to do this. As far as I know, Microsoft has a single employee involved who has laughed off any security concerns with "well the automated malware scanner would find it".
The "community contributors" were just... people active on GitHub when they launched it. Was anyone vetted in any way? No.
The Microsoft Store has actual app reviewers, winget has... "eh, lgtm".
There is no validation when you winget whether or not the executable is from the official source or that a third party contributor didn't tamper with how it's maintained.
HTTPS only guarantees the packets containing the unverified malicious code are not tampered with from the server to you. A server which could very well be compromised and alternate code put in its place.
You are drawing an egregious apples-to-oranges comparison here. Please re-read what you said.
You could serve digitally signed code over plain HTTP and it would be more secure than your example over HTTPS. Unfortunately there are a lot of HTTPS old wives' tales that many misinformed developers believe in.
It's trivial for a remote server to hand two different versions of a script with the traditional `curl | bash` pipeline. https://lukespademan.com/blog/the-dangers-of-curlbash/
There is 0 validation that the script that you are piping into bash is the script that you expect. Even just validating the command by copying and pasting the URL in a browser -- or using curl and piping into more/less is not enough to protect you.
> It's trivial for a remote server to hand two different versions of a script with the traditional `curl | bash` pipeline.
I’m confused by this; it seems to be written in the tone of a correction but you both seem to be saying that you get whatever the server sends. (?)
Yes, but I am also saying that you can't verify that the script that is run on one machine with a pipe is the same script that runs on a second machine with a pipe.
The key part of the original statement is the server can choose to send different scripts based on different factors. A curl&bash script on machine 1 does not necessarily mean the same curl&bash script will be run on machine 2.
The tooling provided by a `curl | bash` pipeline provides no security at all.
With winget, there is at least tooling to be able to see that the same file (with the same hash) will be downloaded and installed.
There are ways to do this better; for example, check out https://hashbang.sh. It includes a GPG signature that is verified against the install script before it is executed.
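For the PowerShell side of this thread, a rough sketch of the same download-verify-execute idea; the URL and expected hash are placeholders, while `Invoke-WebRequest` and `Get-FileHash` are real cmdlets:

```powershell
# Placeholders for illustration -- a real publisher would document these.
$url          = 'https://example.com/install.ps1'
$expectedHash = '0000000000000000000000000000000000000000000000000000000000000000'

# Download to disk instead of piping straight into the interpreter,
# so the thing you inspect is the thing you run.
$file = Join-Path $env:TEMP 'install.ps1'
Invoke-WebRequest -Uri $url -OutFile $file

# Execute only if the artifact matches the published hash.
$actual = (Get-FileHash -Path $file -Algorithm SHA256).Hash
if ($actual -eq $expectedHash) {
    & $file
} else {
    Write-Error "Hash mismatch: expected $expectedHash, got $actual"
}
```

This is roughly what winget's manifests automate: the hash is pinned in a reviewed manifest rather than trusted at download time.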
It's not hard to run the `show` command to see what a winget install will do. https://learn.microsoft.com/en-us/windows/package-manager/wi...
It's easy enough to view the manifests (eg, https://github.com/microsoft/winget-pkgs/blob/2ecf2187ea0bf1...) and, arguably, it is better than the protection against MITM that you would get using naked cURL & Bash, simply because there are file hashes for all of the installer files provided by a third party.
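For example (`Git.Git` is a real winget package ID, picked arbitrarily; exact output fields may vary by winget version):

```powershell
# Inspect the reviewed manifest data -- including the installer URL and
# its pinned SHA256 -- before committing to anything.
winget show --id Git.Git --exact

# Then, if satisfied, install exactly that package.
winget install --id Git.Git --exact
```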
> They are saying curl is strictly better, not that it is impenetrable
Right. But it arguably is not strictly better.
> You can't trust winget
Again, this is not backed up by anything. I have trust in winget. I can trust that the manifest has at least been vetted by a human, and that the application that will be installed should be the one that I requested. I cannot trust that this will happen with curl | bash. If the application that is installed is not the one that I requested, there is tooling and a process to sort out why that did not happen, and a way to flag it so that it doesn't happen to other users. I don't have this with curl | bash.
irm <URL> | iex