Everywhere: Abandon Swift adoption
After making no progress on this for a very long time, let's acknowledge
it's not going anywhere and remove it from the codebase.
https://github.com/LadybirdBrowser/ladybird/commit/e87f889e3...

And this was me trying to use Swift for a data access layer + backend web API. There's barely any guidance or existing knowledge on using Swift for backend APIs, let alone a web browser of all projects.
There's no precedent or existing implementation you can look at for reference; known best practices in Swift are geared almost entirely towards using it with Apple platform APIs, so tons of knowledge about using the language itself simply cannot be applied outside the domain of building client-running apps for Apple hardware.
To use Swift outside its usual domain is to become a pioneer and try something truly untested. It was always a long shot.
Computer languages are the opposite of natural languages - they are for formalising and limiting thought, the exact opposite of literature. These two things are not comparable.
If natural language were so good for programs, we'd be using it - many, many people have tried, from literate programming onward.
I don't see a case for "complex" vs "simple" in the comparison with natural languages.
Here's a very simple, lexical declaration made more human friendly by use of the preposition `my` (or `our` if it is packaged scoped)...
my $x = 42;

Or

x := 42
Or
let x = 42
Or
x = 42
It seems like a regression from modern languages.
It's exactly the opposite. Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.
It's easy to introduce a lot of complexity and clever hacks, because as the author you understand it. But good code is readable for people, and that's why very expressive languages like Perl are abhorred.
I 100% agree with your statement. My case is that a simple language does not necessarily result in simpler and more readable code. You need a language that fits the problem domain and that does not require a lot of boilerplate to handle more complex structures. If you are shoehorning a problem into an overly simplistic language, then you are fighting your tool. OO for OO. FP for FP. and so on.
I fear that the current fashion for very simple languages is a result of confusing these aspects, and a way of enforcing certain corporate behaviours on coders. Perhaps that has its place, e.g. Go at Google - but the presumption that one size fits all is quite a big limitation for many areas.
The corollary of this is that richness places a burden of responsibility on the coder not to write code golf. But tbh you can write bad code in any language if you put your mind to it.
Perhaps many find richness and expressivity abhorrent - but to those of us who like Larry's thinking it is a really nice, addictive feeling when the compiler gets out of the way. Don't knock it until you give it a fair try!
> Get into a rut early: Do the same process the same way. Accumulate idioms. Standardize. The only difference(!) between Shakespeare and you was the size of his idiom list - not the size of his vocabulary.
Here's a Python rut:
    n = 20  # how many numbers to generate
    a, b = 0, 1
    for _ in range(n):
        print(a, end=" ")
        a, b = b, a + b
    print()
Here's that rut in Raku: (0,1,*+*...*)[^20]
I am claiming that this is a nicer rut.

    seq = [0, 1]
    while len(seq) < 20:
        seq.append(sum(seq[-2:]))
    print(' '.join(str(x) for x in seq))
> I am claiming that (0,1,*+*...*)[^20] is a nicer rut.

If it's so fantastic, then why on earth do you go out of your way to add extra lines and complexity to the Python?
Even though I barely know Raku (but I do have experience with FP), it took way less time to intuitively grasp what the Raku was doing, vs. both the Python versions. If you're only used to imperative code, then yeah, maybe the Python looks more familiar, though then... how about riding some new bicycles for the mind.
But there's a lot of hokey, amateurish stuff in there... with more added all the time. Let's start with the arbitrary "structs are passed by value, classes by reference." And along with that: "Prefer structs over classes."
But then: "Have one source of truth." Um... you can't do that when every data structure is COPIED on every function call. So now what? I spent so much time dicking around trying to conform to Swift's contradictory "best practices" that developing became a joyless trudge with glacial progress. I finally realized that a lot of the sources I was reading didn't know WTF they were talking about and shitcanned their edicts.
A lot of the crap in Swift and SwiftUI reminds me of object orientation, and how experienced programmers arrived at a distilled version of it that kept the useful parts and rejected dumb or utterly impractical ideas that were preached in the early days.
You can do classic OOP, FP, Protocol-Oriented Programming, etc., or mix them all (like I do).
A lot of purists get salty that it doesn’t force implementation of their choice, but I’m actually fine with it. I tend to have a “chimeric” approach, so it suits me.
Been using it since 2014 (the day it was announced). I enjoy it.
They want native, partly as a “moat,” but also as a driver for hardware and services sales. They don’t want folks shrugging and saying “It doesn’t matter what you buy; they’re all the same.”
I hear exactly that, with regard to many hybrid apps.
There are plenty of valid reasons to use classes in Swift. For example if you want to have shared state you will need to use a class so that each client has the same reference instead of a copy.
This is the same way that C# works, and C and C++ too - why is this a surprise?
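For what it's worth, the reference-vs-copy distinction can be sketched outside Swift too. Here's a hedged Rust analogy (illustrative only, not how Swift implements classes): two handles to one `Rc<RefCell<...>>` observe the same mutation, the way two variables holding the same Swift class instance would.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two handles to the same shared state: mutations through one handle
// are visible through the other, i.e. reference semantics.
fn shared_counter() -> (Rc<RefCell<i32>>, Rc<RefCell<i32>>) {
    let a = Rc::new(RefCell::new(0));
    let b = Rc::clone(&a); // same underlying state, not a copy
    (a, b)
}

fn main() {
    let (a, b) = shared_counter();
    *a.borrow_mut() += 1;       // mutate through one handle...
    println!("{}", b.borrow()); // ...observe it through the other
}
```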
Swift structs use copy on write, so they aren’t actually copied on every function call.
It’s a common misconception that comes from the standard library data structures, which almost all do implement CoW.
You can also look at the source code for the language if any of it is confusing. It's very readable.
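To make the clone-on-write mechanics concrete, here's a sketch in Rust (as an analogy; this is `Rc::make_mut`, not Swift's actual implementation): copying the handle is cheap, and the underlying buffer is only cloned on the first mutation.

```rust
use std::rc::Rc;

// Clone-on-write in miniature: the buffer is shared until someone
// writes through a shared handle, and only then is a copy made.
fn cow_demo() -> (Vec<i32>, Vec<i32>) {
    let a: Rc<Vec<i32>> = Rc::new(vec![1, 2, 3]);
    let mut b = Rc::clone(&a);    // cheap: just bumps the refcount
    assert!(Rc::ptr_eq(&a, &b));  // still one shared buffer
    Rc::make_mut(&mut b).push(4); // first write triggers the clone
    assert!(!Rc::ptr_eq(&a, &b)); // now two independent buffers
    ((*a).clone(), (*b).clone())
}

fn main() {
    let (a, b) = cow_demo();
    println!("{:?} {:?}", a, b);
}
```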
Personally I wouldn't mind either but my point is that they probably want to cater to the average person, and not just security conscious tech savvy people, and if that's the case, then you really can't exclude FB/IG/YT and others from working properly in your browser.
Why? The average person is well served by large existing players, whereas security conscious tech people are extremely underserved and often actually willing to pay.
Once you have non-trivial network effects, you could continue to influence the ecosystem (see: MSIE, Firefox in its early days, and Google Chrome). There are probably multiple paths to this. This is one.
This kills the internet.
For the record, I don't have a dog in this fight. As long as it runs on Linux, I'm willing to test drive it when it's ready.
It's a shame; I think Swift is an underappreciated language. However, I understand their reasoning. I think if they had tried to just use Swift from the beginning it would have been too ambitious, and trying to add Swift to a fragile, massive project was probably too complex.
Fascinating.
They've shown that the idea it is better at C++ interop is wrong.
I don't know enough to say Rust has same OO support as Swift, but I'm pretty sure it does. (my guess as a former Swift dev: "protocol oriented programming" was a buzzy thing that would have sounded novel, but amounted to "use traits" in rust parlance)
EDIT: Happy to hear a reply re: why downvotes, -3 is a little wild, given current replies don't raise any issues.
Except the only thing that makes OOP OOP: Message passing.
Granted, Swift only just barely supports it, and only for the sake of interop with Objective-C. Still, Swift has better OO support because of it. Rust doesn't even try.
Not that OOP is much of a goal. There is likely good reason why Smalltalk, Objective-C, and Ruby are really the only OOP languages in existence (some esoteric language nobody has ever heard of notwithstanding).
Functional programming is not "encapsulating data in 'objects'". Such a model would naturally feature methods like "void Die.roll()", "void list.Add(element)" which are definitely not functional programming (which emphasizes immutability, pure functions, composition, etc.)
They are both OOP in the way that a football can be both something you use in a sport and something that flies you to the moon. Except it is not clear that the thing that flies you to the moon goes by "football".
> Such a model would naturally feature methods like "void Die.roll()", "void list.Add(element)" which are definitely not functional programming
Exactly, functional programming. `Die` and `list` encapsulate data and use higher order functions to operate on it, which is what defines functional programming.
> which emphasizes [...] pure functions
Functions without encapsulation is imperative programming. If you introduce encapsulation then, yes, you have functional programming, just like your examples above.
Immutability is a disjoined property that can optionally apply to all of these programming paradigms. OOP, functional, and imperative programs can all be immutable, or not.
Composition and encapsulation go hand in hand, so that one is also functional programming, yes. And certainly composition is core to languages like C++/Java/Rust/etc. Naturally, them being functional languages.
To reiterate:
- Imperative programming: Data and functions are separate.
- Functional programming: Data is grouped with functions.
- Object-oriented programming: Data is grouped with message responders.
These definitions are not at odds with the discussion at hand at all. It was clearly stated that Swift has better OO support. Which is obviously true because it tries to be compatible with Objective-C, and therefore needs to have some level of OO support. That is something that Rust has no interest in doing, and rightfully so.
Your redefinition violates the claim, and therefore we can logically determine that it is not what was being used in the rest of the thread. That is, aside from the confused Rust guy that set this tangent in motion, but the remainder of this thread was merely clarifying to him what was originally intended. "He has a different definition for OOP" doesn't really help his understanding. That is what this thread branch has always been about, so it is not clear where you are trying to go with that.
So the approach of having a new language that requires a full rewrite (even with an LLM) is still a bad approach.
Fil-C can likely do the job without a massive rewrite, achieving safety for C and C++.
Job done.
EDIT: The authors of Ladybird have already dismissed using Rust, and with Servo progressing at a slow pace it clearly shows that Ladybird authors do not want something like that to happen to the project.
So long as you don't mind a 2-4x performance & memory usage cost.
Igalia had five engineers working full time who turned that science project into v0.0.1 in less than two years.
I've been programming in Rust for 5 years and I could barely understand the code in their repo. Not because it was somehow advanced but because it didn't make any sense. It felt like that with every decision they could make, they always chose the hardest way.
On the other hand, I have never done any C++ (besides a few tutorials) in my life, and yet I found both Serenity/Ladybird and also WebKit to be very readable and understandable.
BTW: If anyone wants to reply that Rust is different then yes, of course it is - but that's the point. If there is a language that maps nicely to your problem domain, is also very fast, and is well-understood, then why the hell would you use a language that is well-known to NOT map to OOP?
> Job done.
Seems like you forgot a few stops in your train of thought, Speed Racer.
Why not D?
> Swift is strictly better in OO support and C++ interop
So I guess from their point of view that’s why not rust.
I don’t have a horse in the race.
I was genuinely interested in why they didn’t even consider D given they already ruled out rust for those particular reasons, for which it seems D would fulfill nicely.
Hoo boy, wait until you hear about the state of C++'s library "system"
Probably the same reason why Rust is problematic in game development. The borrow checker and idiomatic Rust do not go well together with things that demand cyclic dependencies/references. Obviously there are ways around it but they're not very ergonomic/productive.
Said differently: the C++ interop did not support calling the C++ library I wanted to use, so I wrote a C wrapper.
He even made an attempt at creating his own language, Jakt, under SerenityOS, but perhaps felt that C++ (earlier with, now without Swift) were the pragmatic choice for Ladybird.
Rust was born at Mozilla, sort of. It was created by a Mozilla employee. The first "real" project to put it into action was Servo of which parts were adopted into Firefox. While Rust may not have been developed "specifically" to create a browser, it is a fair comment.
That said, Ladybird was started as part of the SerenityOS project. That entire project was built using C++. If the original goal of Serenity was to build an operating system, C++ would have felt like a reasonable choice at the time.
By the time Ladybird was looking for "better" languages than C++, Ladybird was already a large project and was making very heavy use of traditional OOP. Rust was evaluated but rejected because it did not support OOP well. Or, at least, it did not support integration into a large, C++ based, OOP project.
Perhaps, if Ladybird had first selected a language to write a browser from scratch, they would have gone with Rust. We will never know.
We do know that Mozilla, despite being the de facto stewards of Rust at the time, and having a prototype web browser written in Rust (Servo), decided to drop both Rust and Servo. So, perhaps using Rust for browsers is not as open and shut as you imply.
Tools to do X better are often designed by people who get paid a lot to do X and worry about losing their job if they are not good enough at X.
If he were to tell me that he didn't imagine Rust's helping with browser dev when he designed Rust, then I'd believe him, but the "circumstantial" evidence points strongly in the other direction.
I think you are conflating the development of Servo with the design and development of Rust.
20240810 https://news.ycombinator.com/item?id=41208836 Ladybird browser to start using Swift language this fall
- Excellent for short-lived programs that transform input A to output B
- Clunky for long-lived programs that maintain large complex object graphs
- Really impressive ecosystem
- Toxic community
Lots of people seem really committed to OOP. Rust is definitely a bad fit if you can't imagine writing code without classes and objects. I don't think this makes Rust a bad language for the problem. It just, perhaps, makes Rust a bad language for some programmers.
GTK, Qt, DOM, WinUI 3, Swing, Jetpack Compose, and GWT (to name a few) all provide getters and setters or public properties for GUI state, violating the encapsulation principle [1]. The TextBox/EditBox/Entry control is the perfect example.
The impedance mismatch is that a GUI control is not an object [2]. And yet, all of the object-oriented GUI examples listed implement their controls as objects. The objects are not being used for the strengths of OO; it's just an implementation detail for a procedural API. The reason these GUIs don't provide an API like the one shown in [1] is that it's an impractical design.
"How are you supposed to design an OO TextEdit GUI control if it can't provide a getter/setter for the text that it owns?" Exactly. You're not supposed to. OOP is not the right model for GUIs.
Ironically, SwiftUI doesn't have this problem because it uses the Elm Architecture [3] like React and iced.
[1]: https://www.infoworld.com/article/2163972/building-user-inte...
[2]: From [1], "All the rules in the rule-of-thumb list above essentially say the same thing — that the inner state of an object must be hidden. In fact, the last rule in the list (“All objects must provide their own UI”) really just follows from the others. If access to the inner state of an object is impossible, then the UI, which by necessity must access the state information, must be created by the object whose state is being displayed."
Traits give you the ability to model typical GUI OO hierarchies, e.g.:
    trait Widget {
        fn layout(&mut self, constraints: Constraints);
        fn paint(&self, ctx: &mut PaintCtx);
        fn handle_event(&mut self, event: Event);
    }

    struct Button { ... }
    struct Label { ... }

    impl Widget for Button { ... }
    impl Widget for Label { ... }

    let mut widgets: Vec<Box<dyn Widget>> = Vec::new();
Implementation inheritance can be achieved with good old shared functions that take trait arguments, like this:

    fn paint_if_visible<W>(widget: &W, ctx: &mut PaintCtx)
    where
        W: HasBounds + HasVisibility,
    {
        if widget.is_visible() {
            ctx.paint_rect(widget.bounds());
        }
    }

You can also define default methods at the trait level.

This all ends up being much more precise, clear, and strongly typed than the typical OO inheritance model, while still following a similar overall structure.
You can see real world examples of this kind of thing in the various GUI toolkits for Rust, like Iced, gpui, egui, Dioxus, etc.
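A runnable sketch of the default-method point (all names here are illustrative, not from any real toolkit): every widget gets shared behavior for free from the trait, and any widget can override it.

```rust
// Default methods at the trait level: `describe` is shared behavior
// that implementors get for free, and may override.
trait Widget {
    fn name(&self) -> String;
    // Default method: shared behavior without inheritance.
    fn describe(&self) -> String {
        format!("<{}>", self.name())
    }
}

struct Button;
struct Label;

impl Widget for Button {
    fn name(&self) -> String { "Button".to_string() }
    // Uses the default `describe`.
}

impl Widget for Label {
    fn name(&self) -> String { "Label".to_string() }
    // Overrides the default where the generic form doesn't fit.
    fn describe(&self) -> String { "a text label".to_string() }
}

fn main() {
    let widgets: Vec<Box<dyn Widget>> = vec![Box::new(Button), Box::new(Label)];
    for w in &widgets {
        println!("{}", w.describe());
    }
}
```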
> Especially because there's no GC.
This is the only real issue I can think of. However, for implementing something like a UI, automatic GC isn't really necessary because the lifetime of widgets etc. maps very well to the lexical/RAII model. Windows own widgets, etc. Again, see all the UI toolkits implemented in Rust.
Every browser in use is stuck with C++ because they're in way too deep at this point, but Chromium and Firefox are both chipping away at it bit by bit and replacing it with safer alternatives where they feasibly can. Chromium even blocked JPEG-XL adoption until there was a safe implementation because they saw the reference C++ decoder as such a colossal liability.
IMO the takeaway is that although those browsers do use a ton of C++ and probably always will, their hard-won lessons have led them to wish they didn't have to, and to write a brand new browser in C++ is just asking to needlessly repeat all of the same mistakes. Chromium uses C++ because Webkit used C++ because KHTML used C++ in 1998. Today we have the benefit of hindsight.
Quickly followed by several vulnerabilities in that reference library as well; good move
And Andreas Kling already proved the naysayers wrong when he showed that a new operating system and web browser can be written entirely from scratch, the former not even using any standard libraries; so beware when you are inclined to say 'not feasible'.
I'm struggling to think of a larger factor
What is this mythical subset of C++? Does it include use of contemporary STL features like string_view? (Don’t get me wrong — modern STL is considerably improved, but it’s not even close to being memory-safe.)
Ladybird inherits its C++ from SerenityOS. Ladybird has an almost completely homegrown standard library including their own pointer classes and a couple of different string classes that do some interesting things with memory. But perhaps the most novel stuff are things like TRY and MUST: https://github.com/SerenityOS/serenity/blob/master/Documenta...
You see this reflected all the way back to the main function. Here is the main entry function for the entire browser:
ErrorOr<int> ladybird_main(Main::Arguments arguments).
https://github.com/LadybirdBrowser/ladybird/blob/master/UI/Q...
If Ladybird is successful, I would not be surprised to see its standard library take off with other projects. Again, it is really the SerenityOS standard library but the SerenityOS founder left the project to focus on Ladybird. So, that is where this stuff evolves now.
I can totally imagine how a prolific and ambitious developer would create a world of their own, essentially another language with domain-specific vocabulary and primitives. People often talk about using a "subset of C++" to make it manageable for mortals, and I think the somewhat unusual consideration of Swift was related to this desire for an ergonomic language to express and solve the needs of the project.
    class FontFeatureValuesMapIterationSource final
        : public PairSyncIterable<CSSFontFeatureValuesMap>::IterationSource {
     public:
      FontFeatureValuesMapIterationSource(const CSSFontFeatureValuesMap& map,
                                          const FontFeatureAliases* aliases)
          : map_(map), aliases_(aliases), iterator_(aliases->begin()) {}

Ranges are not memory safe. Sorry.
Having a checklist of "things not to do" is historically a pretty ineffective way to ensure memory safety, which is why the parent comment was asking for details. The fact that this type of thing gets dismissed as a non-issue is honestly a huge part of the problem, in my opinion; it's time to move on from pretending this is a skill issue.
Moreover, Servo aims to be embeddable (there are some working examples already), which is where other non-Chrome/ium browsers are failing (and Firefox too). Thanks to this it has much better chance at wider adoption and actually spawning multiple browsers.
Alas not nearly as modularized as it could be. I think it's mainly just Stylo and WebRender (the components that got pulled into Firefox), and html5ever (the HTML parser) that are externally consumable.
Text and layout support are two things that could easily be ecosystem modules but aren't seemingly (from my perspective) because the ambition to be modular has been lost.
So, yes, it is still prehistoric.
> Once it gets embedding stabilized that will be the time for a full blown browser developement.
Servo development began in 2012. [0] 14 years later we get a v0.0.1.
At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
When Servo is done, it's going to be a beast.
It's getting hundreds of commits per week.
And no, they're not being disingenuous.
It's frustrating to discuss. It is a wonderful case study in how not to make engineering management decisions, and yet, they've occurred over enough time, and the cause is appealing enough, that it's hard to talk about out loud in toto without sounding like a dismissive jerk.
From what I can tell they're pretty laser focused on making a browser (even in this issue, they're abandoning Swift).
I agree with you. I also agree that this decision is an example of that.
SerenityOS had an "everything from scratch in one giant mono-repo" rule. It was, explicitly, a hobby project, and one rooted in enjoyment and idealism from the get-go. It was founded by a man looking for something productive to focus on instead of drugs. It was therapy. Hence the name.
Ladybird, as an independent project, was founded with the goal of being the only truly independent web browser (independent from corporate control generally and Google specifically).
They have been very focussed on that, have not had any sacred cows, and have shed A LOT of the home-grown infrastructure they inherited from being part of SerenityOS. Sometimes that saddens me a little but there is no denying that it has sped them up.
Their progress has been incredible. This comment is being written in Ladybird. I have managed GitHub projects in Ladybird. I have sent Gmail messages in Ladybird. It is not "ready" but it blows my mind how close it is.
I think Ladybird will be a "usable" browser before we enter 2027. That is just plain amazing.
https://github.com/LadybirdBrowser/ladybird/tree/master/UI/A...
Swift is a poorly designed language, slow to compile, visibly not on a path to be a major systems language, and they had no expert on the team.
I am glad they are cutting their losses.
Also funny enough, all cross platform work is with small work groups, some even looking for funding … anyway.
Apple has been always 'transactional' when it comes to OSS - they open source things only when it serves a strategic purpose. They open-sourced Swift only because they needed the community to build an ecosystem around their platform.
Yeah, well, sure they've done some work around LLVM/Clang, WebKit, CUPS, but it's really not proportional to the size and the influence they still have.
Compare them to Google, with - TensorFlow, k8s, Android (nominally), Golang, Chrome, and a long tail of other shit. Or Meta - PyTorch and the Llama model series. Or even Microsoft, which has dramatically reversed course from its "open source is a cancer" era (yeah, they were openly saying that, can you believe it?) to becoming one of the largest contributors on GitHub.
Apple, I've heard, even has the harshest restrictions about it - some teams are just not permitted to contribute to OSS in any way. Obsessively secretive, and at what price? No wonder that Apple's software products are just horrendously bad - if not all the time, then too often. And on their own hardware, too.
I wouldn't mind if Swift dies, I'm glad Objective-C is no longer relevant. In fact, I can't wait for Swift to die sooner.
My understanding is that by default you are not allowed to contribute to open source, even if it's your own project. Exceptions are made for teams whose function is to work on those open-source projects, e.g. Swift/LLVM/etc.
I can recall one explaining to me in the mid-2010s that the next iPhone would be literally impossible to jailbreak in any capacity, with 100% confidence.
I could not understand how someone that capable (he was truly bright) could be that certain. That is pure '90s security arrogance. The only secure computer is one powered off in a vault, and even then I am not convinced.
Multiple exploits were eventually found anyway.
We never exchanged names. That’s the only way to interact with engineers like that and talk in real terms.
I suspect it's not just Apple. I have "lost" so many good GitHub friends - incredible artisans and contributors; they've gotten well-paid jobs and then suddenly... not a single green dot on the wall since. That's sad. I hope they're getting paid more than enough.
I won't say never, but it would take an exceedingly large comp plan for me to sign paperwork forbidding me from working on hobby projects. That's pretty orwellian. I'm not allowed to work on hobby projects on company time, but that seems fair, since I also can't spend work hours doing non-programming hobbies either.
Sort of an exception that proves the rule. Yes, it's great and was released for free. But at least partially that's not a strategic decision from Apple but just a requirement of the LGPLv2 license[1] under which they received it (as KHTML) originally.
And even then, it was Blink and not WebKit that ended up providing better value to the community.
[1] It does bear pointing out that lots of the new work is dual-licensed as 2-clause BSD also. Though no one is really trying to test a BSD-only WebKit derivative, as the resulting "Here's why this is not a derived work of the software's obvious ancestor" argument would be awfully dicey to try to defend. The Ship of Theseus is not a recognized legal principle, and clean rooms have historically been clean for a reason.
Apple is (was?) good at hardware design and UX, but they are pretty bad at producing software.
I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.
I think it's a great language! As long as you can tolerate dynamic dispatch, you really do get the best of C/C++ combined with its run-time manipulable object type system. I have no reason to use it for more code than I have to, but I never grimace if I know I'm going to have to deal with it. Method swizzling is such a neat trick!
And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. e.g. nowadays I’d generally pick higher order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.
I like monkey patching for testing legacy code. I like it less as a thing in production code because it can become a security and reliability problem.
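The substitution being described - a higher-order function instead of method swizzling - can be sketched like this (all names illustrative; Rust used just as a vehicle): behavior is stored as an explicit value, so swapping it is visible at the call site and traceable by tooling.

```rust
// Instead of patching a method table at runtime ("swizzling"),
// store the behavior as a plain function value and swap it.
struct Logger {
    format: Box<dyn Fn(&str) -> String>,
}

impl Logger {
    fn log(&self, msg: &str) -> String {
        (self.format)(msg)
    }
}

fn main() {
    let mut logger = Logger {
        format: Box::new(|m| format!("INFO: {}", m)),
    };
    println!("{}", logger.log("start"));

    // The "swizzle": replace the behavior, explicitly and statically visible.
    logger.format = Box::new(|m| format!("DEBUG: {}", m));
    println!("{}", logger.log("start"));
}
```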
What has had the staying power is the API because that API is for an operating system that has had that staying power. As you hint, the macOS of today is simply the evolution of NeXTSTEP (released in 1989). And iOS is just a light version of it.
But 1989 is not all that remarkable. The Linux API (POSIX) was introduced in 1988 but started in 1984 and based on an API that emerged in the 70s. And the Windows API goes back to 1985. Apple is the newest API of the three.
As far as languages go, the Ladybird team is abandoning Swift to stick with C++, whose lineage traces back to 1979 (as "C with Classes"). And of course C++ is just an evolution of C, which goes back to 1972 and which almost all of Linux is still written in.
And what is Ladybird even? It is an HTML interpreter. HTML was introduced in 1993. Guess what operating system HTML and the first web browser were created on. That is right... NeXTSTEP.
I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.
The constantly moving target fetish coming from the javascript camp may be misguided, you know...
Objective-C was actually created by a company called Stepstone that wanted what they saw as the productivity benefits of Smalltalk (OOP) with the performance and portability of C. Originally, Objective-C was seen as a C "pre-compiler".
One of the companies that licensed Objective-C was NeXT. They also saw pervasive OOP as a more productive way to build GUI applications. That was the core value proposition of NeXT.
NeXT ended up basically taking over Objective-C and then it became of a core part of Apple when Apple bought NeXT to create the next-generation of macOS (the one we have now).
So, Objective-C was actually born attempting to "use standards" (C instead of Smalltalk) and really has nothing to do with Apple culture. Of course, Apple and NeXT were brought into the world by Steve Jobs.
What language would you have suggested for that mission and that era? Self or Smalltalk and give up on performance on 25-MHz-class processors? C or Pascal and give up an excellent object system with dynamic dispatch?
I haven’t used it in Apple’s ecosystem, so maybe I am way off base here. But it seems to me that it was Apple’s effort to evolve the language away from its systems roots into a more suitable applications language that caused all the ugliness.
So they already got what they wanted without inventing a new language. There must be some other reason.
[0] https://atp.fm/205-chris-lattner-interview-transcript#swiftc...
On the second part, I think the big thing was that they needed something that would interop with Objective-C well and that's not something that any language was going to do if Apple didn't make it. Swift gave Apple something that software engineers would like a ton more than Objective-C.
I think it's also important to remember that in 2010/2014 (when swift started and when it was released), the ecosystem was a lot different. Oracle v Google was still going on and wasn't finished until 2021. So Java really wasn't on the table. Kotlin hit 1.0 in 2016 and really wasn't at a stage to be used when Apple was creating Swift. Rust was still undergoing massive changes.
And a big part of it was simply that they wanted something that would be an easy transition from Objective-C without requiring a lot of bridging or wrappers. Swift accomplished that, but it also meant that a lot of decisions around Swift were made to accommodate Apple, not things that might be generally useful to the larger community.
All languages have this to an extent. For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory. This decision probably isn't what would be best for most of the non-Google community, but it's also something that could be reconsidered in the future, since it's an implementation detail rather than a language detail.
I'm not sure any non-Apple language would have bent over backwards to accommodate Objective-C. But also, what would Apple have chosen circa 2010, when work on Swift started? Go was (and to an extent still is) "we only do things these three Googlers think is a good idea", it was basically brand-new at the time, and even today Go doesn't really have a UI framework. Kotlin hadn't been released when work started on Swift. C# was still closed source. Rust had barely appeared and was still undergoing a lot of big changes through Swift's release. Python and other dynamic languages weren't going to fit the bill. There really wasn't anything that existed then which could have been used instead of Swift. Maybe D could have been used.
But also, is Swift bad? I think that some of the type inference stuff that makes compiles slow is genuinely a bad choice and I think the language could have used a little more editing, but it's pretty good. What's better that doesn't come with a garbage collector? I think Rust's borrow checker would have pissed off way too many people. I think Apple needed a language without a garbage collector for their desktop OS and it's also meant better battery life and lower RAM usage on mobile.
If you're looking for a language that doesn't have a garbage collector, what's better? Heck, what's even available? Zig is nice, but you're kinda doing manual memory management. I like Rust, but it's a much steeper learning curve than most languages. There's Nim, but its ARC-style system came 5+ years after Swift's introduction.
So even today and even without Objective-C, it's hard to see a language that would fit what Apple wants: a safe, non-GC language that doesn't require Rust-style stuff.
Do you have a source for this?
C# has a copying GC, and easy interop with C has always been one of its strengths. From the perspective of the user, all you need to do is to "pin" a pointer to a GC-allocated object before you access it from C so that the collector avoids moving it.
I always thought it had more to do with making the implementation simpler during the early stages of development, with the possibility of making it a copying GC some time in the future (mentioned somewhere in stdlib's sources, I think), but it never came to fruition because Go's non-copying GC was fast enough and a lot of code has since been written with the assumption that memory never moves. Adding a copying GC today would probably break a lot of existing code.
My read is that this is the main thing that happened here.
The Ladybird team is quite pragmatic. Or, at least their founder is. I think they understood the scale of building a browser. They understood that it was a massive undertaking. They also understood the security challenge of building an application that big whose main job is processing untrusted input. So, they thought that perhaps they needed a better language than C++ for these reasons. They evaluated a few options and came away thinking that Swift was the best, so they announced that they expected to move towards Swift in the future.
But the other side of being pragmatic is that, as time passed, they also realized how hard it would be to change horses. And they are quite productivity driven. They produce a YouTube video every month detailing their progress, including charts and numbers.
Stalling the "real" job of building a browser to make progress on the somewhat artificial job of moving to a new programming language just never made sense, I guess.
And the final part of being pragmatic is that, after months of not really making any progress on the switch, the founder posted this patch essentially admitting the reality and suggesting they just formalize the lack of progress and move on.
The lack of progress on Swift that is. Their progress on making a browser has been absolutely mind-blowing. This comment is being written in Ladybird.
And someone will be stuck doing nothing because they are unsatisfied with every language. :-)
So I think it probably meets the threshold of "toxic", but more importantly, it's not effective. Everyone and their dog has already heard of Rust; aggressive proselytising is not going to help drive Rust adoption, it's just pissing people off.
The rust thing is more like actual religious evangelism, like having Jehovah's Witnesses knocking on the door. They have seen the light and you haven't
- predictable
- incessant
- with belittling/reductionist/arrogant/elitist/combative phrasings
- handwaving rust shortcomings and tradeoffs
Time has been spent, yes. But the topic at hand is whether, after so much time, the conclusion to abandon this path is justified.
Edit: I explained my position better below.
LLVM: Pretty much everyone who has created a programming language with it has complained about its design. gingerbill, Jon Blow, and Andrew Kelley have all complained about it. LLVM is a good idea, but that idea was executed better by Ken Thompson with his C compiler for Plan 9, and then again with his Go compiler design. Ken decided to create his own "architecture agnostic" assembly, which is very similar to the IR idea in LLVM.
Swift: I was very excited with the first release of Swift. But it ultimately did not have a very focused vision outlined for it. Because of this, it has morphed into a mess. It tries to be everything for everyone, like C++, and winds up being mediocre, and slow to compile to top it off.
Mojo doesn't exist for the public yet. I hope it turns out to be awesome, but I'm just not going to get my hopes up this time.
LLVM is
- Slow to compile
- Breaks compilers/doesn't have a stable ABI
- Optimizes poorly (at least, worse than GCC)
Swift I never used, but I tried compiling it once and it was one of the two slowest compilers I ever tested. The only thing nearly as bad was Kotlin, but 1) I don't actually remember which of the two was worse, and 2) Kotlin wasn't meant to be a CLI compiler; it was meant to compile in the background as a language server, so it was designed around that.

Mojo... I have things I could say... But I'll stick to this. I talked to engineers there and asked one how they expected any Python developers to use the planned borrow checker. The engineer said "Don't worry about it", i.e. they didn't have a plan. The nicest thing I can say is they didn't bullshit me 100% of the time when I directly asked a question privately. That's the only nice or neutral thing I could say.
I suggest you ask around to see what the consensus is for which compiler is actually mature. Hint: for all its warts, nobody is writing a seriously optimized language in any of the options you listed besides LLVM.
You could imagine creating a modified Go assembler that is more generic and not tied to Go's ABI that could accomplish the same effect as LLVM. However, it'd probably be better to create a project like that from scratch, because most of Go's optimizations happen before reaching the assembler stage.
It would probably be best to have the intermediate language that QBE has and transform that into "intermediate assembly" (IA) very similar to Go's assembly. That way the IL stage could contain nearly all the optimization passes, and the IA stage would focus on code generation that would translate to any OS/arch combo.
I don't think that's true. Zig has a cross-compiler (that also compiles C and C++) based on LLVM. I believe LLVM (unlike GCC) is inherently a cross-compiler, and it's mostly just shipping header files for every platform that `zig cc` is adding.
What I will say is that it seems popular to start with LLVM and then move away from it. Zig is doing that. Rust is perhaps heading in that direction with Cranelift. It feels like, if LLVM had completely nailed its mission, these kinds of projects would be less common.
It is also notable that the Dragonegg project to bring GCC languages to LLVM died but we have multiple projects porting Rust to GCC.
The latter is definitely a defining capability of Anders Hejlsberg. (C#/Typescript designer)
I don't have an emotional reaction to this, i.e. I don't think you're being mean, but it is wrong and reductive, which people usually will concisely, and perhaps reductively, describe as "mean".
Why is it wrong?
LLVM is great.
Chris Lattner left Apple a *decade* ago, and thus has ~0 impact on or responsibility for Swift's interop with C++ today.
Swift is a fun language to write, hence why they shoehorned it in in the first place.
Mojo is fine, but I wouldn't really know how you or I would judge it. For me, I'm not super-opinionated on Python, and it doesn't diverge heavily from it afaik.
I was hired at Google around the same time, but not nearly as famous :)
AFAICouldT it was a "hire first, figure out what to do later", and it ended up being Swift for TensorFlow. That went ~nowhere, and he left within 2 years.
That's fine and doesn't reflect on him, in general, that's Google for ya. At least that era of Google.
If he's such a horrible engineer then we should have lots of LLVM replacements, right?
It's a cool project and I'd consider it for a toy language but it's far from an LLVM replacement.
And if the C compiler you use is clang then you're still literally making use of LLVM.
What are other projects trying something similar that deserve attention?
But you should perhaps give your attention to Servo. They were founded as a project to write a modern browser in Rust. So, no hand-waving there.
No hand-waving on the Ladybird team either, in my opinion. They have very strong technical leadership. The idea that building a massive application designed to process untrusted user input at scale might need a better language than C++ seems like a pretty solid technical suggestion. Making incredible progress month after month using the language you started with seems pretty good too. And deciding, given the progress building and the lack of progress exploring the new language, that perhaps it would be best to formally abandon the idea of a language switch...well, that seems like a pretty solid decision as well.
At least, that is my view.
Oh, and I was a massive Servo fan before the Ladybird project even began. But, given how much further Ladybird has gotten than Servo has, despite being at it for less time and taking on a larger scope...well, I am giving my attention to Ladybird these days.
This comment was written in Ladybird.
Carefully making decisions and then reassessing those choices later on when they prove to be problematic is the opposite of handwavey...