I’d wildly guesstimate that for 70% of use cases you wouldn’t even need 50% of the stuff, given some slight modifications. The web is just so bloated.
Edit: might as well prune down the CSS a little too, and maybe dump Wasm, WebGL, and canvas.
I suppose anything that's gated behind a permission prompt in Chrome/Firefox/Safari could be culled without too much trouble at least.
What about starting a new web then only for the supported subset?
Based on my current browsing experience, this may be a plus in the long run.
The closest example is AMP, but you'd have to be Google to force people to use it.
You could choose a subset that still lets the sites that don't get on everybody's nerves run fine.
For the remainder, people who need it could run an extension that drives a full Chromium and converts what it can to the target subset.
Of course not. You can use a bloated browser for that.
If a lightweight browser could be significantly faster and more secure, people would tolerate using two browsers again. Although Ladybird hasn't reached that bar.
YouTube certainly could use a small set of web standards, although YT regularly breaks on Firefox. It's a video player with links and forms.
Source? I've never had a single problem ever, and I don't know anyone else that has either.
https://mapstodon.space/@hareldan/112619447620823614
https://techhub.social/@weston/112607264644039009
https://pdx.social/@crowdotblack/112604055589800602
https://social.bitwig.community/@gerotakke/11263106899010552...
It's even deliberately designed not to be easily extensible, so as to avoid the temptation of adding features.
> For a new project I wonder how much simpler (or secure) a browser could be made if you only allowed a subset of js and browser apis
IMHO the only viable subset is the empty set. There are some surviving HTML-only browsers that are still usable for, e.g., viewing documentation or browsing simple-minded websites (like HN), but such sites are fewer and fewer every year, unfortunately.
I really don't want to drop the all too common negative comment - in particular since I already use an alternative web browser - but the initial investment required just for an MVP seems mind-boggling to me.
I think a basic HTML browser that can automatically delegate all it cannot handle to other apps - PDF viewing to a PDF viewer, video playback to a video player, and JS-requiring things to a big browser - would be interesting (if it already exists, please let me know).
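That dispatch-by-content-type idea can be sketched in a few lines. Everything here is hypothetical: the handler commands (`zathura`, `mpv`) and the fallback browser are just example choices, not part of any existing project.

```python
# Hypothetical MIME-type dispatch table for a minimal browser that
# delegates anything it cannot render itself to an external app.
HANDLERS = {
    "text/html": None,             # rendered natively by the basic browser
    "application/pdf": "zathura",  # example external PDF viewer
    "video/mp4": "mpv",            # example external video player
}
FALLBACK = "firefox"               # the "big browser" for everything else

def pick_handler(content_type: str):
    """Return None for native rendering, else the external command to run."""
    mime = content_type.split(";")[0].strip().lower()
    return HANDLERS.get(mime, FALLBACK)
```

The browser itself then only has to implement the `None` branch; everything else becomes a process spawn.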
I'm curious if throwing out DOM/js would make the task more approachable. My intuition says yes. But I'm thinking CSS would still make it super difficult. Also I've heard that HTML has some rough areas that make it hard.
That being said I find Netsurf is pretty capable even if I don't really use it very often. Yeah some pages don't render right but it's really fast. So who knows maybe we can get away with a reduced set of features or better yet go back to using separate clients instead of web apps for things like chat, email and forums.
Sometimes I play games, or use "web apps" and using a different tool in those instances would be fine. Back in the early 2000s I remember using Firefox with an "open in IE" extension that allowed me to primarily use FF and fall back to IE when sites were broken. As websites modernized I used the extension less and less.
Also consider desktop apps that use Electron. Bundling a simpler browser and building the app to the capabilities of that browser could greatly reduce install size and memory usage.
Alternative idea: go the other extreme and stop pretending modern browsers are anything other than virtual machines. Turn browsers into sandboxed VMs that only run Wasm, with backwards compatibility ensured by shared Wasm libraries that render HTML and run JS.
Just deciding that you don't want to implement >50% of web specs "for simplicity" and expecting that to be a winning strategy is very HN.
But from the lwn article:
It is written in C++ [..].
Oops! :-)
I would build it one piece at a time, each as a well-documented, Pythonic (not in Python; just in coding/documentation style) library.
The reason this is impossible is the monolithic design of these things. There are good reasons for it -- the pieces interrelate -- but I think it's possible to break it up (with a lot of work).
For example:
- A clean, documented JavaScript engine would be a good start.
- Python-style, independent, isolated JavaScript libraries would help (usable serverside or clientside where possible)
- An independent rendering engine would be nice -- again, documented and independent of the above
- Network libraries
- HTML / XML / CSS parsing libraries
... etc.
If this were in place, code could be interchangeable between serverside and clientside much more than today (and usable in other places, such as using JS as a scripting language in other systems).
Test cases for rendering (or even just making a screenshot) wouldn't need the whole browser. You would import and call into the rendering engine to make a .png without selenium.
Making a new web browser would involve mostly glue code and OS-specific code.
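A sketch of what calling into such a standalone rendering engine could look like. None of these names exist: `render_to_png` and `RenderResult` are invented here purely to make the proposed interface concrete, and the body is a stub, not a real rasterizer.

```python
from dataclasses import dataclass

@dataclass
class RenderResult:
    width: int
    height: int
    png_bytes: bytes  # the rasterized page as a PNG

def render_to_png(html: str, width: int = 800, height: int = 600) -> RenderResult:
    """Hypothetical entry point of an independent rendering library.

    A real engine would parse the HTML/CSS, lay it out, and rasterize;
    this stub only returns a placeholder so the interface shape is clear.
    """
    return RenderResult(width, height, b"\x89PNG...placeholder")
```

A test harness could then assert on pixels directly, with no full browser or Selenium in the loop.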
> Somewhat ironically, it was not possible to log into Discord using Ladybird. It does a fair job of rendering pages, but speed and stability are still wanting.
Fascinating to see people expecting everything to work out of the box from something written from scratch.
And as an avid Rust appreciator, I find all the comments about "rewrite it in Rust", as if that solves anything the OP spoke about, really frustrating.
Let the team cook; if you don't like the dish, help cook it. The project has been super interesting, and honestly I feel we need more like this in the browser space.
468 new features have been added since 2018; 148 have been added in the past 18 months alone [1].
I applaud the effort but unless something fundamentally changes in their approach, I don't see how they can catch up.
Sure, you don't need every single spec, but even the core ones like Flexbox, Grid are large and complicated and are constantly being tweaked.
It's still 500 new things to test and actually implement, but not as bad as the original 1-to-2 change.
HTML5 was the forever missed opportunity to do things right, but keeping the browser-side effort low was something that always had to be fought for on the W3C committee.
(Sarcastically saying something is interesting is something I find distasteful.)
Anyway the irony is that the project chose to use a discussion platform which uses lots of modern web cruft and would be a big challenge for a new browser, when they could have chosen a (maybe less capable) simpler platform like IRC or some simple web forum which would more likely have run on Ladybird.
The fact is, Discord as a platform is more accessible compared to IRC.
Guess it depends on what you mean by "accessible". In terms of accessibility for impaired users, IRC is an open protocol vs. a service that disallows third-party clients. I'm sure there exist better options for IRC than for Discord, and at the very least, IRC allows you to access it however you want, be it visual, textual, voice control, or whatever.
Besides, Discord, being a US company, needs to block users from Cuba, Iran, Syria (and North Korea), and is also banned in a bunch of countries like China, the UAE, and Egypt. So in that sense, Discord seems less accessible than IRC.
The only remaining point is that Discord has a somewhat easier "getting set up" UX than IRC for younger users, as it's more similar to the type of services they probably already use.
From the LWN article:
> Users will need GCC 13+ or Clang 17, and Qt6 development packages to play along at home. Ladybird compiles and runs on, for example, Fedora 40 without a problem, *but it is a long way from being suitable for regular use.*
Seems distributed binaries are missing, but that's easy to "fix"/"work around". Is there something else that makes you say it is suitable as a daily driver while the LWN author does not?
Even with SerenityOS, they were long of the mindset that binaries/ISOs/etc. should not be produced and distributed by them, apparently.
I'm forking Ladybird and stepping down as SerenityOS BDFL - https://news.ycombinator.com/item?id=40560768 - June 2024 (262 comments)
Plus, the build process is well documented and works out of the box (at least on Ubuntu in my experience) and the community is nice and welcoming.
Yeah, that seems to be a huge problem with many larger FOSS projects.
(Not affiliated in any way)
Maybe it is of interest. (Just a heads up, it dates back to 2018.)
Most of the open source software I use is really rock solid, so I’m not sure what bug or missing feature I would be able to address. I also think that as a newbie it’s perhaps useful to work where I can learn a lot in a short amount of time. Those dopamine hits can be pretty awesome :)
I definitely would have preferred the momentum to go to SerenityOS, and perhaps importing Firefox/LibreWolf into SerenityOS.
I think people are only going to switch once the Firefox user experience is noticeably better for the average person. Google is on track to make that happen after they finally disable Manifest v2 extensions and as they continue their crackdown on ad blockers.
Anything that is not a dirty tactic will not work.
Also, the moment users get an error they don't understand in a certain browser, they blame the browser and switch. How do you think Firefox lost its users?
Have you tried using Firefox to request an appointment with the Spanish Drive and Vehicle License Administration? It won't work. The civil servant will tell you to use Chrome.
When users get two or three "use Chrome" answers before things work, they will just use whatever works. Google won the browser war; adding Ladybird to siphon users from Firefox is not going to help either.
Force sites to implement baseline standards and not rely on non-standard features. Disallow major web players from pushing their own browsers and from relying on their own browser's non-standard features. Permanently require browser ballots on all widely used consumer OS's. Heavily fine violators consistently. Use the money to support an independent standards body.
Naive users will get a random, standards-compliant browser. This throng will help prevent sites with no concern for the law from testing against only one browser instead of against the standard.
I would even say that topics on the internet with a lot of lobbying behind them, such as pirating software/media, have in the same timeframe managed to get absolutely nothing done.
I am afraid it would not be possible to regulate this legally.
Apathy will not bring change. Only speech and action.
It helped other browsers gain market share. Sadly it's not enough alone. Major web players can promote their own browser and sabotage others, even if only by neglecting to test in them. IMO a permanent ballot law is needed alongside restrictions keeping major web vendors from pushing their own browsers and relying on their own browsers' non-standard features.
> Competing browsers saw their traffic increase,[16] suggesting that these smaller competing developers were gaining users. However, long-term trends show browsers such as Opera and Firefox losing market share in Europe, calling into question the usefulness of the browser choice screen.[1]
Opera is the smaller competitor referred to in both halves, and it lost user share in Europe while this was in effect. About the only thing the ballot can claim is a loss in users of the first-party browser, IE, but that effect was already occurring prior to the ballot anyway.
Moping every time a (modestly successful) approach is brought up isn't going to move the needle.
Dominant browsers rely on many tricks to gain and hold their position. It will take more than one approach to restore a balance.
I think the browser-ballot-style thing may have hurt more than it helped in the long run. E.g., users clicking things they don't understand during the initial setup of their computer (while a whole lot of other things they don't deal with often are going on too) may have actually produced some extra short-term download hits that immediately scared those users away from trying other browsers out, once they realized what their selection meant in terms of change. How many users clicked on Maxthon and were confused/disappointed with a (then) Trident-based browser that wasn't quite IE? Hard to say, but the data isn't jumping out to show the opposite.
I agree it would take more than one approach to restore the balance, but I disagree that this means all approaches are inherently helpful to roll out, let alone inherently key. The ballot initiative didn't result in any measurable change, even against the tide when compared to other regions, and it simultaneously draws both people and discussion away from the tricks dominant players actually use to gain market share. E.g., Microsoft had IE (and later Edge) bundled as the default choice after the ballot, and its share continued to decline until they used these other, working tactics, which have made it the second most used desktop browser again.
Very easy in userChrome.css. I put mine on GitHub but it's nothing special, there are lots and lots that are available.
Easy would be installing a theme/extension, or toggling an option in the configuration. Yours can break after updates, requires knowing what it is (I do, I just can't be bothered), and has to be re-downloaded and re-applied every time an update breaks it.
I have decided to live with the awful floating tabs rather than personalising userChrome.css; I wish someone could convert it into an extension.
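For what it's worth, the floating-tab tweak tends to be only a few lines of userChrome.css. This is an illustrative sketch, not a guarantee: the selectors below are Firefox internals that circulate in community themes and can break with any update (exactly the fragility complained about above), and the `toolkit.legacyUserProfileCustomizations.stylesheets` pref must be set to `true` in about:config for the file to load at all.

```css
/* Illustrative userChrome.css snippet: squares off the "floating" tabs.
   Internal selectors; may break on any Firefox update. */
.tab-background {
  border-radius: 0 !important; /* remove the rounded, detached look */
  margin-block: 0 !important;  /* close the gap above/below each tab */
}
```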
/slow clap
Yes, it can all be deactivated. I also use FF, but I do not trust Mozilla anymore.
- Acquiring companies like Pocket and Anonym in multi-million-dollar deals, plus the millions in bonuses that the CEO likes to enjoy.
- At the same time, no significant expenditure on developing its core software. Firefox is still riddled with bugs. They even went as far as firing the people who used to work on Servo, Rust, WASM, etc.
I think it's clear to them that there's not enough money to be made with small tricks like Pocket, VPN, Relay, etc. Firefox is still the only profitable product and contributes ~90% to Mozilla's revenue. Much of it coming from Google which is the one thing that people have been asking them to be less dependent on.
And we shouldn't be surprised if they double down on making more money off of Google and also introduce ads. Acquiring Anonym, an ads company, implies that it might have already started.
And at the same time greatly raising compensation for the CEO, despite shrinking numbers.
Such a loss to see it jettisoned and abandoned. Myopic decisions like these are filling the cloud of pessimism that now shrouds Mozilla.
Full browser feature set is hard, but an app that bundles its own webview can choose to not trigger the edge cases, so that's a realistic path forward.
Lots of companies with centuries-long records of manufacturing previously durable goods have in the last decade or two switched to using, e.g., inferior-quality steel to increase profits. That'll destroy the brand, but in the meantime there are great profits to be had!
https://en.wikipedia.org/wiki/The_Goose_that_Laid_the_Golden...
The project limped along for a bit, then recently Igalia started putting resources into it: https://news.ycombinator.com/item?id=39269949
But I very much endorse both. And yes, in theory energy should be focused, but I rather have 2 smallish projects, but with potential, than one slightly bigger one, with fighting about direction all the time.
With that said, almost all of Mozilla's revenue comes from Google, which might possibly influence what features they implement, their stance on various web standards etc.
They all tend to lag behind over time, until the fork is eventually too old and either it's abandoned, useful changes I was relying on are dropped, or it becomes just too far behind upstream to be fully compatible (and thus just annoying to use).
Just the burden of keeping up with upstream changes, in either Firefox or Chrome forks, seems significant enough that I'm quite pessimistic about the lifespan of these projects.
You might just as well do your own thing and not pretend to be a mainstream browser replacement at all.
But long term, I would much rather see a truly independent open source browser engine.
Unless you’ve got some examples to back this up, it’s FUD. Posting hypotheticals is how rumours start, and this is just stirring the pot.
And as a bonus, those were added and activated as features via an update, without telling anyone. At least for me.
(and paid ads you have on the home screen)
The backlash resulted in studies being opt-in, and I thought it still was but I don't know, I use "policies.json" to setup my browsers.
https://i.imgur.com/S7d4ZXg.png
Ouch, they were new to me and also activated.
And so I feel much more at home with Microsoft Edge and Google Chrome.
And then to do it in C++, when other browsers and kernels are flirting with things like Rust. How long will it take for people to trust this new C++ code? I applaud the effort but worry it'll be in vain. Then again, with how many projects I start not because they make sense but because I enjoy them, perhaps I should see it more that way.
It's developed by Mozilla, Google's controlled opposition. I submitted a link[0] on ads coming to Firefox but HN shadowed it almost immediately (present on /newest when logged in, nowhere to be found when logged out).
[0] https://www.jwz.org/blog/2024/06/mozilla-is-an-advertising-c...
(JWZ is not a fan of Hacker News and its community.)
(Though it might be a good idea to special-case this to add a non-referral link at submit time.)
Flagged for being NSFW and devoid of interesting content. The average user, like you, is unaware that the content they are seeing is different from the content you submitted.
Introducing Anonym: Raising the bar for privacy-preserving digital advertising https://blog.mozilla.org/en/mozilla/mozilla-anonym-raising-t...
Oh.
How Brave of them.
The primary value in Firefox existing right now is that the web standards process becomes dysfunctional when there are only two major browser rendering engines, but that is fading away with Firefox’s market share. Hopefully Ladybird can gain enough momentum to matter there because it doesn’t seem like Gecko can maintain its relevance.
Firefox (mostly) caught up with quantum and process isolation on the desktop, but by then I think it was too late. And the android version still has horrible performance, stability, and compatibility compared to Chromium browsers.
Mozilla just doesn't have the same engineering resources to pour into the browser that Google does, so I'm not sure there's any way they can really maintain pace with Google outside of becoming yet another Chromium browser.
On the other hand, things like performance improved drastically, and it is now competitive with Chrome. Firefox the product is in the best shape it ever was.
But while Chrome was growing, the ad campaign was crazy. You could not miss it - Google results, Gmail header, every Adsense anywhere. Everyone was told to install Chrome when they used internet, every day - there was no escape.
https://www.bloomberg.com/news/newsletters/2023-05-05/why-go...
I think you may have made a typo. Possibly $500M?
* Firefox is effectively not a community project. It seems to be ruled with an iron fist by the commercial side of the operation.
* Firefox broke its extensibility - which was the whole rationale of the Mozilla project to begin with.
* Lots of telemetry and call-home mechanisms, so much so that it is difficult to opt out even if you want to - in Thunderbird, and I believe also in Firefox; see : https://superuser.com/q/1672309/122798 (but correct me if I'm wrong and it's an app-specific thing).
If we want the web to be a multi-vendor platform based on open, multi-vendor standards, as it has been for parts of its history, how much does Firefox really buy us? Do we really think that Firefox will take a hard stand against Google if their survival depends on not doing so?
I use Firefox, but I think it's no longer true to its original mission in a lot of ways. Safari's global 18% share goes a lot farther toward making the web a duopolized rather than monopolized platform than Firefox's tiny sliver does. The less -opolized it is the better for society, or need I remind you that Google is presently mired in court for its conduct in its other web-adjacent monopolies such as web search and web advertising?
For the benefit of those not in the know: FLoC https://wicg.github.io/floc/
Also, Chromium is open source, and Firefox is mainly funded by Google, so it's not like Firefox has a real value proposition or independence.
(from a hasty scan of caniuse.com)
So your stats back up my point that Safari is a popular browser. The post I was challenging completely ignored the existence of Safari and your post backs up mine. Thanks.
https://chromewebstore.google.com/detail/open-in-firefox-bro...
What's the state with Servo?
Do I understand it correctly that Servo is the core of a browser but not the browser itself? How much work would it be to create a browser on top of Servo? And is there such a project?
They do have an official “browser”, ServoShell, which is basically a minimalistic testbed. IIRC adding tabs to it is on their roadmap.
https://developer.mozilla.org/en-US/docs/Glossary/Chrome
http://www.catb.org/%7Eesr/jargon/html/C/chrome.html
Netscape called it that and thus so does Mozilla. Try loading up `chrome://branding/content/about-logo.png` for an example of the chrome URI scheme in Firefox!
Google later also appropriated the name "Chrome Zone" for a chain of retail stores in the UK. [1]
IIRC when Chrome appeared, the name was chosen because it was a browser without chrome (i.e., it was just the rendering window with the URL bar on top), unlike other browsers at the time.
>User interface chrome, the borders and widgets that frame the content part of a window
Neither.
> In the post-Spectre world you must have site isolation. The JS for a site (roughly, eTLD+1) must have its own OS address space separate from other sites.
Wasn't the whole point of Spectre/Meltdown to read the virtual address space of a different process?
Meltdown lets a process read from kernel memory.
There are several variations of Spectre. The first variant ("Spectre V1") lets a process read its own memory; the second variant ("Spectre V2") lets a process read another process's memory.
Web browser manufacturers seem to be focused on preventing Spectre V1, although I'm unclear on whether that's because V2 is too hard to exploit from JavaScript, is mitigated in other ways (e.g., CPU updates), etc.
Further reading: https://stackoverflow.com/q/53042230/25507, https://stackoverflow.com/q/48200753/25507, https://security.googleblog.com/2018/07/mitigating-spectre-w..., https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-..., https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit...
Sorry, this does not make much sense. Why would you need a timing attack to read memory from your own address space? Just a regular code execution exploit should do it.
Here, I found the relevant info (that I was too lazy to find before I posted my first comment, apparently):
> While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs.
If you're making a VM such that the running code can only access a particular array, Spectre allows a timing attack that can get malicious code in the VM access to the full memory space.
You're right that it's not that scary for most use cases. What it really means is that it's hopeless to make memory inaccessible to a sandbox without putting a process isolation barrier betwixt the two, as there's no real way to close out all of the timing attack possibilities. In principle, if the only thing you needed to foreclose was memory vulnerabilities, then sufficiently good programming™ would let you have the sandbox in the same process space; as a matter of practice, though, anyone looking at product security seriously would still make you put in process isolation, because that kind of good programming just doesn't exist at scale yet.
(Note that Meltdown, but not Spectre, allows timing attacks that cross process isolation domains.)
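Besides process isolation, the other mitigation browsers shipped was degrading the clocks these attacks depend on: `performance.now()` was coarsened, and `SharedArrayBuffer` (which can be used to build a fine-grained timer) was temporarily disabled. A toy sketch of timer coarsening, with an illustrative granularity that doesn't correspond to any particular browser:

```python
import time

def coarse_now(granularity_ms: float = 0.1) -> float:
    """Return a monotonic timestamp in ms, floored to the granularity.

    Loosely mimics post-Spectre performance.now(): an attacker timing a
    cache hit vs. a miss needs resolution finer than the granularity to
    tell the two apart.
    """
    now_ms = time.monotonic() * 1000.0
    return now_ms - (now_ms % granularity_ms)
```

Coarsening alone turned out to be insufficient (attackers can amplify or average), which is why site isolation became the load-bearing defense.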
Have we not learned our lesson yet, or am I misunderstanding the situation? I believe it was a Microsoft study that linked unsafe memory access to ~70% of exploit chains.
> I'm curious
Is it really curiosity, though? Because the answer is straightforward: the project started as a hobby, and the developer picked whatever language they were proficient in. Andreas is open about the fact that he started SerenityOS and Ladybird as a rehab project. Put up too many barriers in this setting (like learning a new language and its whole ecosystem) and it might not happen at all.
> Is it really curiosity though?
I'm kind of annoyed at this whole train of comments ("I'm curious..."). On so many occasions I see something cool, and the main comment track is "why hasn't this been written in Rust?" (or some other allegedly safer/better programming language).
It's like seeing a beautiful painting being painted and arguing about the kind of paintbrush the painter has used.
It's so sad.
No it is not. You don't use your browser for its artistic value (which is in the eye of the beholder). You also don't make an announcement for a painting.
This is more like using a non-stainless screw in a high-humidity environment. Yes, it will hold for a while, but it is objectively a bad choice. Non-stainless may have been the only option 300 years back, but this is 2024.
And writing a browser from scratch is a huge undertaking. When you invest resources in such a project, you probably want it to be more weatherproof than its C/C++-based predecessors. So it is quite reasonable IMHO to ask why C++ was chosen for this project.
There is no good safe language to write a browser in right now except C/C++. Rust might sound like an alternative, but its safety story is sad when compared to C/C++.
Ada might be a cool option; so is ATS.
If this is a work of art whose code is to be admired and only viewed as a creative work, fine I’m sorry I asked the question and that it was criticism of one’s vision.
However, if this is something that is intended to be “driven” by other users in the future I think it’s a perfectly acceptable question to ask why more modern safety mechanisms are not being employed. Maybe there is a rationale I’m unaware of or a reason why those mechanisms are not employed.
I believe you are right. But I also think that your phrasing is not very nice. It shows a lack of empathy and understanding and feels entitled.
You could convey essentially the same message while being way nicer to everyone involved, and that would be way more effective.
If your ideal is a browser engine written in a safer language, and you want to work towards that goal, phrasing things the way you did in your comment is one of the worst ways to do it, because you risk putting people off, and they will associate that bad feeling with your idea. See how people react to comments about Rust.
Some of your options are:
- writing a browser engine yourself, in a safer language
- contributing to an existing browser engine like Servo
- convincing projects to switch to a safer language, or to accept contributions in a safer language
- convincing someone or a group of people to do those things
- funding such an enterprise
At this point, the world needs proof that a browser engine can practically be written in something other than C++, because that's what all three major engines are written in. There is strong evidence that Rust can be an option given the existence of Servo, but look how much faster Ladybird is progressing than Servo.
If you don't work yourself towards your goal, you can only humbly share your wish.
Lobbying is fine too, but you really need to make sure you don't make others hate your idea because of the way you communicate. Specifically on this topic, you need to take into account that many people are already annoyed by the numerous "why not Rust" comments, so you are walking on eggshells.
What's more, don't forget the global picture: security (although critical, we agree) is only one aspect. Security is irrelevant in a project that doesn't even exist. C++ is better than other languages in a lot of other respects, and you will need to address that in the context of writing a browser.
Good luck in your endeavors, I hope you succeed, I believe it's a good ideal.
I’m happy there is someone out there genuinely interested in creating another alternative to the near monoculture that is web browsers. I hope that they gain traction in the open source community as well as wider adoption upon maturity.
I’ll try to follow the project development and maybe I can learn myself why other languages may not be as well suited as C++. It seems that language proficiency is the most common answer I’ve received besides earned criticism of the way I formed my question.
It's like seeing a beautiful painting and realizing the painter didn't use lightfast colors. In ten years the painting will not be beautiful anymore.
Yes, the painter / author put in a lot of work, and this deserves acknowledgement. But ones decision to use it or not is not only based on the amount of work put in.
It's not; it's a stock-standard way to ask a question in bad faith.
I appreciate that the author birthed the project as a way to direct his energies towards more productive means. I don’t think it’s relevant to the question though.
Perhaps it's a similar situation for Ladybird.
>Memory safety remains a relevant problem: all Chrome exploits caught in the wild in the last three years (2021 – 2023) started out with a memory corruption vulnerability in a Chrome renderer process that was exploited for remote code execution (RCE). Of these, 60% were vulnerabilities in V8.
> V8 vulnerabilities are rarely "classic" memory corruption bugs (use-after-frees, out-of-bounds accesses, etc.) but instead subtle logic issues which can in turn be exploited to corrupt memory. As such, existing memory safety solutions are, for the most part, not applicable to V8. In particular, neither switching to a memory safe language, such as Rust, nor using current or future hardware memory safety features, such as memory tagging, can help with the security challenges faced by V8 today.
Technically no. But if you can decrease those 40% where it could help you can than focus more on the logic issues. Maybe.
As for the other 60%, they point to logic errors as the root cause, but that's true of all memory corruption bugs: they wouldn't exist if there weren't logic errors behind them. The actual difference here is that the vulnerabilities are either in the machine code generated by the JIT (e.g., type confusion) rather than in V8's own code, or in code they insist must be memory-unsafe for performance reasons.
So the takeaway there should be that JS engines for hostile code should either avoid both a JIT and memory-unsafe code paths entirely, or use stronger tools to verify the correctness of the JIT and those code paths. But hey, retaining the capability to speed up bloated web apps ever so slightly is apparently more important than that.
This doesn't imply, though, that another project in C++ will share these traits.
At the end of the day, Ladybird is still a hobby project, so one of the main objectives is to have fun, which does not always coincide with rationality (although the decision to move on from NIH[1] is a sign that this might be changing).
But it was definitely started as a hobby project so your point still stands, mostly.
(To be clear, I'm not answering the question of which programming language should be used to write Ladybird.)
> Have we not learned our lesson yet,
"We"? Do you speak in the name of the developer? What an odd choice of pronoun.
Either way, if you are adamant about writing a new browser in your "memory-safe language" of choice, be the change you want made. Go ahead and write a new browser from scratch. Show the world how it should be done.
for the same reason people still use the English language, despite it being full of crazy inconsistencies and very hard to master as a non-native speaker: proficiency.
Proficiency is one of the most, if not the most, valuable metric when choosing the tool you will use to take on some complex/daunting task.
(The success of legalese is still debated, but its existence is generally accepted)
BTW Rust (or any other so called "memory safe" language) is not the equivalent of legalese, it's the equivalent of using French because it's the "language of diplomacy" (that's why many English words come from French) instead of English.
If you're not proficient in French, French legalese won't save you.
If you are not proficient in French, you likely ought not to conduct diplomacy in French.
If you are not proficient in Rust, you likely ought not to achieve memory safety by writing everything in Rust.
- French itself does not add much value to diplomacy. The reason to use it is that everyone else who does diplomacy is expected to know French (and probably isn't a native speaker, which makes things a bit more equal). English is probably taking over there, like it has done in other domains.
- The recent C++ versions are not actively promoting shooting yourself in the foot like older variants, but they aren't exactly trying to prevent it.
- Rust is going out of its way to prevent writing memory unsafe code. It is still possible if you know what you are doing, but just trying out stuff at random is more likely to give you a compile-time error than undefined behaviour.
- Most programmers aren't very competent, no matter what they believe about themselves. With Rust they are less likely to commit serious errors. Or get anything done, but that's a separate discussion. The French will probably point out your pronunciation mistakes too before continuing discussion, but that's also not the point here.
Funny, given that the word diplomacy is a French word, together with embassy, treaty, alliance, passport and protocol :)
> Rust is going out of its way to prevent writing memory unsafe code
But if someone is not proficient in Rust it will only slow them down and they'll end up fighting the language and the compiler instead of using the language.
It's a common complaint among non-Rust programmers.
> Most programmers aren't very competent
I strongly believe Andreas Kling is very competent.
For the rest of us who are not him, incompetence does not go well in hand with Rust, which is a very complex language.
EDIT: pretending that a very proficient C++ programmer will choose Rust because "it's 2024" is the same thing as pretending that they will choose Haskell, which is equally memory safe and also equally complex.
Why does nobody ever recommend Haskell or Smalltalk?
It doesn't seem much like a discussion about memory safety to me, but rather promoting Rust.
This is true. But it only tells us about the cultural dominance that France had at the time the convention started. If history had happened differently, Chinese, Hindi or something else could be in a similar position.
> But if someone is not proficient in Rust it will only slow them down and they'll end up fighting the language and the compiler instead of using the language.
This is indeed the choice: make it difficult to write code but more likely that the result is correct; make it easy to achieve high performance but risky (C++ and similar); or just accept the overhead of checking everything at run time (JVM and CLR languages, etc.). I would say there is a niche for the first.
> Why nobody ever recommend Haskell or Smalltalk?
I think at this point it's well known that the pure functional lazy evaluation model rules out too many useful data structures and makes it easy to introduce accidental complexity. As for Smalltalk, it seems (I've never actually used it) to me that most of its once unique ideas have been copied to current mainstream languages. It also seems to have a huge number of fragmented implementations and most of them seem to have a heavy runtime virtual machine.
French is still very much relevant, but it took centuries to make it less relevant than before to the point where we are now.
Rust in comparison is minutes old and there's no evidence it will dominate the field of system programming in the future. See: Ruby on Rails for web development.
> This is indeed the choice. Make it difficult to write code but more likely that the result is correct
I don't buy it.
Making it hard to write code is nobody's choice; it's accidental complexity, which any sane language designer would avoid if possible, because it severely hinders language adoption.
The opposite is also true: code easy to write will also be more easily correct.
Elixir is easy to write and will almost automatically be correct in complex scenarios such as distributed systems.
> pure functional lazy evaluation model rules out too many useful data structures
Rust is Haskell with a different syntax though and makes it very hard to write simple linked lists.
> most of its once unique ideas have been copied to current mainstream languages
The same happened to FP and RoR, and it's happening with Rust.
Even Java is more functional than ever, because it's a good paradigm, not because it's a fad.
Again: this seems to me more promoting Rust than a discussion about memory safety and I am really not interested in that. So I'll see myself out.
This is true (for some values of minute). It is also why I was suggesting that C calling convention, HTTP etc, not Rust, would be the computing lingua franca. Now that I think of it, a few years ago TCP/IP would have been on the list but now with HTTP/3 it's not that certain any more.
> Rust is Haskell with a different syntax though...
This is quite a bold claim, but if you have a rigorous proof beyond "all Turing complete languages are the same" I would be interested in seeing it. It's a pity you left.
> ...makes it very hard to write simple linked lists.
This is interesting in the light of the beginning of the sentence, because in Haskell the linked list is the easiest data structure. Simple linked lists aren't always that simple, though. There is a reason why they used to be a recurring technical interview question.
> Again: this seems to me more promoting Rust than a discussion about memory safety
Funny, to me this seems more about promoting JIT and garbage collection. Nothing wrong with that, as long as you admit that there are niches where those are a problem but memory safety is still useful. And so far there haven't been other serious language candidates for that niche.
Has the rise of these memory safe languages caused any shift in the proficiencies of the average developer?
I see a lot of younger people gush over python early in their careers but see a lot of Java/.NET in enterprise.
I personally grew up learning Delphi, PHP, and HTML. Java and .NET came later, but I rarely had a hand in initiating the projects so my language proficiency typically flowed with the job/project I was working on professionally.
That's a good question, but I have no answer for it.
AFAIK the data is missing or is inconsistent.
But I can link you to the obligatory Ken Thompson "three weeks away from an OS" story.
Are there any other languages that I'm missing? If ATS has a prettier syntax, it'd be my candidate for writing a web browser in.
It is Azure that is more keen on adopting memory safe languages, and has the mandate that new systems code should be done using them.
And we all know how secure the average user's Windows computer is.
And Windows' security is so good that it's Windows who's powering tens of billions of servers, smartphones, IoT, appliances, routers etc. throughout the world? Oh, wait, no... These are all running Linux.
And the uptime. Let's not forget the uptime with Patch Tuesday.
Windows does not strike me as the ecosystem we should strive to imitate.
https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...
Those that have glass ceilings should not throw rocks.
> Have we not learned our lesson yet
Why are you speaking like this project is asking you to write code in C++? You are free to exclusively write Rust. Other people writing C++ has 0 impact on what you're writing or what lessons you've learned.
Interpreting JavaScript at all is a problem. Ladybird's LibJS compiles it to bytecode and interprets that (which is usually better than interpreting the AST). The bytecode interpreter is written in C++, and it's still pretty damn slow - websites take a long time to load, and LibJS is the main bottleneck.
The reality is, modern websites throw so much junk at your JS implementation that you basically need to JIT-compile it in order to have any sort of reasonable performance. And with JIT all memory safety guarantees are thrown out of the window - it doesn't matter if you write your compiler in Rust, C++ or a .NET language - if there's an exploit it's disproportionately more likely to be in the output assembly than it is to be in your compiler.
Browsers nowadays make a best effort, and they have a stack of other mitigations in case the JIT leaks: https://chromium.googlesource.com/chromium/src/+/main/docs/d...
That point is 50% FUD. A language which "maintains strict memory safe contracts" but has an `unsafe` keyword; or has a "native code interface", or uses libraries implemented in a different language, doesn't really strictly maintain its guarantees. And on the other hand, a language in which you can, in principle, load and execute arbitrary code from a string you got from the user, can be hardened very well by statically-checkable constraints.
So, it's a matter of degrees rather than absolutes. If you then add considerations such as programming paradigm flexibility and performance, C++ is very much a valid choice even for the use case of a browser.
With the current state of the web, filled with mostly spam content designed to generate ad revenue and to exploit the user, accessed from mainstream browsers produced by large corporations that are increasingly trying to do the same thing, web users are being squeezed from all sides. Using the web today is a hostile experience, and the only safe haven from all this nonsense is using community-supported alternative browsers, that are really stripped down versions of mainstream ones, and relying heavily on ad, cookie, JavaScript and other blockers, which may stop working at any point. This is a difficult task only tech savvy users can realistically do, while most other users have no choice.
A new, independent, browser alone will not solve this, but it's certainly a step in the right direction.
I think the actual root of the problem is that the people and organizations developing and running the sites do want to force the ads, analytics and other things upon you and you as a user basically have to hack around that. If the users actually took a stance with something a bit like https://en.wikipedia.org/wiki/GNU_LibreJS then the sad reality would be that you just couldn't use most sites altogether.
I still need to enable things here and there but it's a fairly easy straightforward process.
Couple this with uBlock and Firefox Containers (where I am logged in to Google only for Gmail, in a specific container) and the web is pleasant again.
What does this mean, that Ladybird will no longer run on Serenity OS? And that is up to the Serenity peeps to make it run should they wish to do so?
> Ladybird is a truly independent web browser, using a novel engine based on web standards.
So I assume they want to build a truly independent web browser instead of leaving the browser market to Google, Apple, and Firefox (which relies heavily on Google ad revenue), and to the executives who run them, who are primarily motivated by money.
I can think of two reasons for this
- money is involved and someone buys them out / influences the direction of the project
- the reality of the task ahead sets in and the dev team gives up on custom engine development
For those of you coming back to this comment from the future, yep "told you so". ;-)
If you build anything that accumulates users, business interests will be aroused at some point. Business people don't give a shit about the technology itself unless of course the technology itself really is a USP and can be converted into €$. But most people (end users) don't care much about the underlying technology in the browser and whether it's open source or not, chromium or not. Only technologists care.
If you have any evidence to support this, you could become quite well-known for disproving the Lindy effect's applicability to software projects.
What is the value proposition? Is it to be another general purpose browser, so there’s more competition with Chrome / WebKit? Or to be a niche browser, that could be an alternative to Electron?
How close is it to achieving that?
It's not a business. Some people develop a browser for their own joy and now they decided to target Linux and use 3rd party libraries.
In the future it might be a real competitor to Chrome & Safari, but we're several years from that happening, if at all.
But it might not need to be. It’s nice just having a second system. I don’t know to what extent Ladybird can replace Chrome, but the issues that come with a monoculture are known. There’s probably at least some hope that Ladybird could take off once it reaches a critical mass.
Not really as long as it is funded by Google.
If it’s not about tech but rather about funding, then should we be concerned of the monoculture of American browser vendors, or English speaking browser vendors? I don’t think we need to be, but I also think the tech side is the important place to prevent a monoculture for the open development of web technologies.
Not entirely separate anymore as both make use of shared external libraries.
By doing so openly and incrementally without attachments to deadlines or an employer with specific priorities, the dev can take the time to identify inefficiencies, pain points, subtleties across the stack etc. which are then recorded in the development of the browser.
Thus, the project is not aimed to "achieve" something at some point in time; its "value proposition", if you must use this annoying term, is the development process itself.
People are cheering on it because they love the author and want a new web browser written from scratch, but practically speaking it is a web browser that is 1) written in a memory unsafe language, 2) doesn't really have any sandboxing, and 3) is highly incomplete.
There is this document here:
https://raw.githubusercontent.com/LadybirdBrowser/ladybird/d...
so there are some plans for sandboxing so that's good, but if I'm reading the code correctly (please correct me if I'm wrong) then no actual sandboxing is yet implemented on non-SerenityOS systems (e.g. there are some "pledge" calls that I can find, but it looks like it'll only work on SerenityOS?), and, if I'm being honest, this is nowhere near aggressive enough for a web browser, especially one written from scratch. If the goal was "produce the most secure web browser in the world" there's much more you could do with its architecture that even likes of Chrome won't (because of legacy considerations, and because they care a lot about how fast it runs).
But, of course, practically speaking as long as it has no market share (so no one will realistically target it) then even minimal sandboxing should be fine, and as long as the project itself doesn't pretend that it's something it is not then all is good.
I don't see this catching up any time soon to a point where you could use this and not deal with a very broken browsing experience. So this will likely stay a bit of a niche thing for quite some time. But I'm happy for people to prove me wrong.
Easy to forget that WebKit started out as a fork of KHTML when Apple embraced it for Safari. Later Google forked it as Blink for Chrome. So, it has been done before. But it's a lot of work.