The ones that stand out the most to me are C# and Typescript.
Microsoft has a large team dedicated to improving these languages constantly, and instead of focusing exclusively on making them easier to use or more performant, they are constantly adding features. After all, it is their job. They are incentivized to keep making them more complex.
The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.
One of the reasons I have so much fun working in node/Javascript these days is that it is simple, and not much has changed in express/node/etc. for a long time. If I need an iterable that I can simply move through, I just do `let items = [];`. It is so easy and hasn't changed for so many years. I worry that we'll eventually end up with a dozen ways to do an array and modern code will become much more challenging to read.
When Typescript first came out, it was great. Types in Javascript are something we've always wanted. Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
This is probably just old man ranting, but I think there's something there. The old version I used to debate about was C vs C++. Now look at modern C++, it's crazy powerful but so jam packed that many people have just gone back to C.
It has 3 ways to declare functions, multiple variations on arrow function syntax, a weird prototypal inheritance system, objects you can create out of "new" on functions, object literals that can act as pseudo-classes, classes, decorators, for-i loop + maps + filter + for-in loop (with hasOwn) + forEach, async / await + promises and an invisible but always-on event loop, object proxies, counter-intuitive array and mapping manipulations, lots of different ways to create said arrays and mappings, very rich destructuring, so many weirdnesses in parameter handling, multiple ways to do imports that don't work in all contexts, exports, string concatenation + string interpolation, no integer (but NaN), a "strict mode", two versions of comparison operators, a dangerous "with" keyword, undefined vs null, generators, sparse arrays, sets...
It also has complex rules for:
- scoping (plus global variables by default and hoisting)
- "this" values (and manual binding)
- type coercion (destroying commutativity! see the snippet after this list)
- automatic semicolon insertion
- "typeof" resolution
On top of that, you execute it in various different implementations and contexts: several browser engines and nodejs at least, with or without the DOM, in or out web workers, and potentially with WASM.
There are various versions of the ECMA standard that change the features you have access to, unless you use a transpiler. And we haven't even touched the ecosystem, since this is about the language. There would be too much to say anyway.
There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.
But then again, the newer features do make writing code a lot nicer, giving more compile-time analysis, warnings, etc., hopefully resulting in slightly better code. And the new features also enabled a lot of performance improvements in the runtime, which is nice. .NET 2/4 wasn't all that fast; .NET 8 can be a lot faster.
Further, the newer JS features do indeed give you better compile-time analysis, warnings, etc., and result in slightly better code.
I've seen developers make complete messes of codebases that would be mostly trivial when using modern JS features, and they hide behind The Good Parts to justify it. And this includes suggesting that classes are somehow bad, and that avoiding them in favor of POJOs and messily bound functions is preferable, despite JS not receiving a dedicated class concept until years after The Good Parts was published...
Javascript seems much, much, much closer to Lisp than to Smalltalk. Granted, all three are very dynamic, but message passing needs to be bolted onto javascript. Meanwhile pretty much all of lisp is included "for free" (...via some of the ugliest syntax you've ever used).
Right around when I started using it (mid 2019) there was a bunch of V3 releases, each of which on its own might not have seemed like much, but they all improved small parts of the engine, making it easy to get typing on most of your code in a functional style without adding more than a few type declarations and some function typings.
The Crockford crowd would like us to live in a world of ES5 as if that's some kind of badge of pride, while justifying it with a warcry of "functional", while breaking the preconceptions of functional programming all throughout.
Personally I prefer neither prototypes nor classes. 90% of the time you just want the interfaces, unions, or inferred types, and in the few places where you actually want inheritance and/or object methods you really are just better off with a factory method that creates a literal that is used directly or satisfies an interface.
I'm not inclined to use a language that can't be fixed.
For example, it has different ways to declare functions because assignment is generally consistent (and IMO easy to understand) and the "simplicity" of Javascript allows you to assign an anonymous function to a variable. However, you can also use a standard function declaration that is more classic.
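For instance, a minimal sketch of the two forms being contrasted here:

```js
// An anonymous function assigned to a variable:
const greet = function () { return "hi"; };

// The classic function declaration (hoisted within its scope):
function greetClassic() { return "hi"; }
```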
But I do understand what you're saying. If anything, I think it's generally in agreement with my feelings of "Javascript doesn't need to be more complex."
> There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.
This is hilarious and probably true. I think I am the former since I've been working with it for 20+ years, but I also think there's a reason it's the go-to bootcamp language alongside Python.
But I train beginners in JS and boy do I have to keep them in check. You blink and they shoot their foot, give the pieces to a dog that bites them, then gives them rabies.
In some respects I think if there were a well defined "Typescript, The Good Parts" I would happily migrate to that.
I do wonder if there will, one day, be a breaking fork of JavaScript that only removes things. Maybe a hypothetical "super strict" might do the job, but I suspect the degree of change might not allow "super strict" code interacting with non "super strict" easily.
BiteCode_dev has provided a pretty good summary of a lot of the issues. A lot of them have easy fixes if you are prepared to make it a breaking change.
[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Around half of that list is things added later that were supposed to make the language easier to use.
I sure would like a real "Good Parts" series of books.
It's because people are talking past each other, and that's because they are using the same word to mean different things. The word "simple" is often used to mean "easy" or "familiar".
Simple is very different from easy, and familiar things are easy but don't have to be simple at all.
javascript is not simple, but it is easy.
There is no wrong use of language; there's just people who don't bother to communicate well in the most effective language available. In this case you could simply cohere the two viewpoints since you have insight rather than blaming one party and calling them wrong (...which is wrong).
Thy can'n't sirus be. language works only farso as withbreathings we follow, Leading paths through gardens means fail, and bloom'st chaos wear not!
what's more effective than having a pre-defined term be what it means, rather than what the speaker intends internally?
Trying to understand the speaker, presumably, and not wielding your pet definition like a semantic argument. It's fundamentally boring conversation with no benefit to either party and it makes you look like an illiterate ass.
There is a reason we ignore a good chunk of the language to be productive with it.
Javascript breathed its last breath the moment someone saw NestJS and said "wow that's a good idea".
I still don’t understand how someone looked at Spring and thought “Wow, that’s pretty good! I’ll bring it to a platform that has worse performance than Java, to a language that was designed with dynamicity in mind and has no native static typing”.
You take a slow framework, like Spring, and put it on a slower runtime (Node), so you get double the slowness with fewer benefits.
Sorry, what?
https://journal.stuffwithstuff.com/2013/07/18/javascript-isn...
People have been going all SICP (Abelson/Sussman) on JS ever since Crockford exposed the hidden scheme (or hidden not-scheme-at-all, if you insist) and moved JS far, far away from the humble prototype OOP it started as. And that had little to do with any language extensions that had been creeping in very slowly given the lowest-common-denominator nature of web development, and everything to do with the funky scope-binding tricks that generations of programmers had been taught in the memorable "let's make Scheme OOP" drills of SICP and the MIT course (which so many other universities based their teaching on).
The same can be said for most languages, even assembly language, and especially so for C++.
That said, I hate the constant stuffing of features (though not this one which is much needed), more stuff around JS like WebComponents, or CSS adding a ton of sugar.
I prefer languages with a small instruction set, as then you can learn all you can do in the language and hold it in your head. JavaScript used to have a small instruction set, I don't feel it does any longer.
Aside from this I don't know that I see any benefit to these structs, although perhaps that is just the article doing that whole "write JavaScript like Java" thing that classes and constructors enabled.
Java doesn't have unsigned integer types because that is "simpler" but that doesn't remove the need to deal with unsigned integers in file formats and network protocols. But now you have to do a convoluted mess of code to deal with that. I'll take a complex language that solves real problems over a "simple" language any day.
It feels like any old language gets this way...
```
function foo () {}
const foo = () => {}
```
```
function x() {/* ... */}
const x = function() {/* ... */}
const x = function foo() {/* ... */}
const x = (function() {/* ... */}).bind(this)
const x = (function foo() {/* ... */}).bind(this)
const x = () => {/* ... */}
const x = () => /* ... */
```
After all, you can add a `*` to any existing function without a change in the function or its callers.
```
() => { return 1; }
() => 1
```

```
const foo = (function() {}).bind(this);
const foo = () => {};
```
Edit: And speaking of assignment versions, there's a new comment that adds a third of the same. I kinda get the feeling a lot of the "multiple ways to declare functions" complaint is just people who don't understand how the pieces of javascript fit together and think these are all independent. They're not. Just declaring a function has only a few ways, but declaring a function and giving it a name multiplies that out to some extent.

In javascript, functions are first-class objects: they can be assigned to variables and passed around just like numbers or strings. That's what everything except "function foo() {}" is doing.
The web development community has created the perfect environment for nobody to ever get any work done while still feeling like they're being productive because they're constantly learning minutiae.
> There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.
There is only one reason to believe JS is simple: because you don't know enough.
But boy does it all get confusing.
> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
I'm not so sure about that. I think we end up consuming a lot of these features in the TS types that get published alongside libraries. We just don't know it, we just get surprisingly intuitive type interfaces.
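As a sketch of what that looks like, here is the kind of conditional type a library might publish (names invented for illustration); the consumer never touches the type machinery, they just get the intuitive inference:

```ts
type ElementOf<T> = T extends readonly (infer U)[] ? U : T;

function firstOrSelf<T>(value: T): ElementOf<T> {
  // The cast is the library author's burden, not the consumer's.
  return (Array.isArray(value) ? value[0] : value) as ElementOf<T>;
}

const n = firstOrSelf([1, 2, 3]); // inferred as number
const s = firstOrSelf("hello");   // inferred as string
```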
So I thought as well. And then I recently learned that TypeScript uses `var` internally for performance.
From src/compiler/checker.ts:
```
// Why var? It avoids TDZ checks in the runtime which can be costly.
// See: https://github.com/microsoft/TypeScript/issues/52924
/* eslint-disable no-var */
var deferredDiagnosticsCallbacks: (() => void)[] = [];
```
```
if (...) var x = ...
else x = ...

try { var x = ... }
catch (error) { x = ... }

for (...) {
  var x: NodeJS.Dict<any> = {}
  x[key] = ...
}
return x
```
Really, who thought it was a good idea that finalization and error handling blocks must have no access to their subject scope? Every damn language copies that nonsense, except for js and its `var` hoisting.
All it does is move the declaration to the correct visual scope, instead of a dangling up-front declaration.
Admittedly, I understand most coders are already trained to read the latter.
Very true. As a downstream consumer, I can do all business logic in ancient, simple languages. But I'm sure these things are extremely nice to have for the more complicated upstream dependencies I rely on.
Such as, for instance, making 'var' not work in class declarations.
I am going to continue to use var for everything, because I think let and const are stupid.
It is not cool or interesting to learn about new scoping rules introduced by let, and it is not cool or interesting that so many people — especially juniors, but not exclusively — are lulled into a false sense of security by believing const means the referenced value is immutable, which it isn't.
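The misconception fits in three lines:

```js
const items = [];
items.push(1);  // fine: const locks the binding, not the value
// items = [];  // TypeError: Assignment to constant variable.
```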
I am going to continue to write small JavaScript, like from The Good Parts.
Naw, var has function scope and hoisting, both of which are useful.
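For example, a sketch of where the function scoping comes in handy:

```js
function firstNegativeIndex(values) {
  for (var i = 0; i < values.length; i++) {
    if (values[i] < 0) break;
  }
  // var is function-scoped, so i is still visible after the loop
  return i < values.length ? i : -1;
}

firstNegativeIndex([3, -1, 4]); // 1
```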
People don't really get better at handling the complexity of large code bases. We are fundamentally the same organic matter that existed prior to the first computer coming into existence. So as code bases and library bases grow larger and larger, they need to be proportionately easier to read or even ignore.
Your code needs to be dead boring 90% of the time, otherwise you're imposing on your coworkers. And using variables before they're declared is just shitty behavior.
I also prefer boring code, but I think having the choice between var, let, and const is less boring than only having var.
Imagine you are a C# programmer just as C# 1.0 is released. C# is a fairly simple language at that time (and similar to other languages you already know), so you can get caught up on it fairly easily and quickly. A few years later, C# 2.0 comes out. It's got a handful of features, but not too much for you to absorb. Likewise C# 3.0, 4.0, etc. As long as you stay on the C# train, the rate of new features does not exceed the rate that you can learn them.
Years later, another person comes along and is new to C#, which is now at version 5.0. They are presented with a huge sprawling language and they have to learn nearly all of it at once to deal with the codebases they are contributing to. It's a nightmare. They long for a language that's actually, you know, simple.
So maybe they find some other newer language, Foo, which is at 1.0. It's small and they learn the whole thing. After a couple of years of happy productive use, they realize they would be a little more happy and productive if Foo had just one or two extra little features. They put in a request. The language team wants happy users so they are happy to oblige. The user is easily able to learn those new features. And maybe some other Foo users want other new things. 2.0 comes out, and they can keep up. They can stay on the train with 3.0, 4.0, etc.
They never explicitly asked for a complex language, but they have one and they're happy, because they've mastered the whole thing over a period of years. They've become part of the problem that bothered them so much years ago.
Fundamentally, the problem is that existing users experience a programming language as the delta between the latest version and the previous one. New users experience a programming language as the total sum of all of its features (perhaps minus features it has in common with other languages you already know). If you assume users can absorb information at a certain fixed rate, it means those two cohorts have very different needs and different experiences.
I don't think there's a silver bullet. The best you can hope for is that a language at 1.0 has as few bad ideas as possible. But no one seems to have perfect skill at that.
A lot of people seem to think that the overall size and "complexity" of the language (and only the language) matters? Personally I don't think it matters how long the spec is if you and your team aren't using those features. The ecosystem matters more. "What should I use to write a GUI in C#?" is a complicated question with tradeoffs, but none of them have anything to do with the language per se.
Nothing is going to compete with C++'s template system for complexity, though.
IMO, that's even worse.
It means that when you want to learn C#, you're also forced into learning a complicated tool that isn't really useful for much else.
At least when I'm learning Rust or Typescript, I can keep using my existing editor.
> A lot of people seem to think that the overall size and "complexity" of the language (and only the language) matters? Personally I don't think it matters how long the spec is if you and your team aren't using those features.
That works until you have to use code that does use those features.
> The ecosystem matters more. "What should I use to write a GUI in C#?" is a complicated question with tradeoffs, but none of them have anything to do with the language per se.
That's fair. At least to an extent.
The further you stray from the ecosystem's intended use cases, the more you have to depend on the quality of the language itself. Thankfully, for mature, mainstream languages like C#, there are a lot of things you can do before that point.
> IMO, that's even worse.
To be fair, the more accurate way to phrase it is "disappears into .NET tooling", because this part is also exposed through the standard CLI of .NET and isn't Visual Studio specific. Managing packages through npm and dotnet is quite similar, with a significant difference: the average dependency graph of a .NET application or package is 10 to 100 times smaller than that of a Node.js one, and compatibility breaks happen much, much more rarely.
> It means that when you want to learn C#, you're also forced into learning a complicated tool that isn't really useful for much else.
This is untrue. On top of Visual Studio, your choices are Rider, VS Code and VSCodium, Neovim and Emacs, and anything else that integrates through VSC extension bridges, LSP and debugger adapter protocols.
I also usually recommend that all newcomers start with the CLI to manage projects and packages, because it's more straightforward than navigating through all sorts of windows in an IDE, and because they also get to know the basics .NET builds on top of. It's an experience that is very similar to using Cargo.
Incoherence through gradual accretion of complexity is probably the fate of most non-trivial systems, beyond just programming languages. Individual programs, certainly. Buildings too. (People?)
Also, I am a big fan of your books, Bob! Thank you! :)
There’s more than one language that I initially disliked, and only learned to like after some of (what I saw as) the glaring flaws were fixed. After they added more features!
For one, Objective-C. I didn’t like it at all until they added ARC, removing the need for manual retain / release everywhere. After that, coming to the language pretty late, I came to like Obj-C quite a lot.
JavaScript is another one. For a long time I thought it was an awful language and avoided using it, but a few things have brought me round and now I really like it:
- modules
- async/await
- TypeScript, if you’ll allow that as a “JavaScript feature”
I even quite like JS classes, although I could live without them.
Simplicity is good, but power and expressiveness are also good.
The new-ish yearly release cycle is mostly to blame, I think: they feel like they need to add some headline features every year, but the team also (maybe due to org-chart politics) seems unable to make the deep runtime-level changes that are needed to actually add anything useful, so they just add syntax sugar every year, bloating the language.
A lot of stuff is also designed to be independent of library changes - IIRC for example if you use nullability, the compiler will emit the Nullable attribute's definition into your .dll as a hidden class, so that your library will work even on older versions of the runtime with older base class libraries. Doing this complicates the compiler (and adds a tiny, tiny amount of bloat to your dll) but means that more people can adopt a new feature without having to think about upgrading their SDK or runtime.
My personal opinion is that if a change can be done adequately entirely at the compiler level without runtime/library changes, it should be done there. It allows the people working on the language, libraries and runtime to iterate independently and fix problems without having to coordinate across 3 teams.
I upgraded to .NET 8 recently and I love primary constructors. I don't use them everywhere but they are great for dependency injection or for small classes.
https://mareks-082.medium.com/dark-side-of-the-primary-const...
Let's just say they could have done a much better job on it. It feels rushed and haphazard for such a mature language.
I think, as a feature, this is sort of the MVP. They could have done a better job of it by adding more to it (e.g. maybe allow the readonly modifier on the constructor properties). It's hard to imagine them being able to take anything away from primary constructors that would make it better.
We try really hard. I'm always worried about pouring too much complexity in and alienating new users. At the same time, we also want to serve existing users who really do benefit from new features. It's a tricky balancing act.
I think the only way to address what you're alluding to is to continually deprecate small parts of the language, so that upgrading is manageable for active codebases. And you probably have to be really aggressive about pushing this forward, because there will always be excuses about why you should hold back just this one time and this one feature is an exception that needs to be held back just a little bit longer.
But in the long run, if you don't force people to change a little bit continuously, it will become a big enough issue to split the ecosystem. See python2 to python3. Or you end up forced into supporting bad practices for all eternity, like C++. And having to take them into account for every. Single. New. Feature.
That further raises the barrier to entry: participation in developing the language narrows to people who are completely focused on its development and have unusual mastery of it, and who can't identify with the people struggling with its complexity.
If not at the technical level, then at the business level, where people definitely don't have the time to understand why the go-to heap allocation method should return a scoped pointer instead of a raw pointer.
Unfortunately, this probably is only viable for strongly-typed languages like C#; for loosely-typed languages like Python, the risk of iterative changes is that if someone moves a codebase several generations ahead at once, it'll introduce lots of subtle changes that'll only become obvious once certain codepaths are exercised...and given how bad testing coverage is for a lot of software, that probably risks breakages only occurring once it's deployed, that are nontrivial to discern via reviews or even static analysis.
Did you see https://news.ycombinator.com/item?id=41788026 ?
"My concern, and IMO what should be the overwhelming concern of the maintainers, is not the code that is being written, or the code that will be written, but all the code that has been written, and will never be touched again. A break like this will force lots of python users to avoid upgrading to 3.17, jettison packages they may want to keep using, or deal with the hassle of patching unmaintained dependencies on their own.
For those Python users for whom writing python is the core of their work that might be fine. For all the other users for whom python is a foreign, incidental, but indispensable part of their work (scientists, analysts, ...) the choice is untenable. While python can and should strive to be a more 'serious', 'professional' language, it _must_ have respect and empathy for the latter camp. Elevating something that should be a linter rule to a language change ain't that."
Strongly phrased, but planned obsolescence in a language is really expensive. You're basically quietly rotting the work of your users, and they will hate you for it.
I note that C# basically hasn't deprecated any of the language itself, the dotnet core transition was a big change to the runtime. And that was expensive enough, again due to dropping a lot of old libraries.
I never got this argument, personally. Sure, having to rip out a bunch of code because of lang-level changes can suck, but you also only really need to do that if you're starting a new project anyway or for whatever reason want to keep being pinned to @latest version of the lang.
If you're a researcher who uses Python 2 as a means to an end, then just stick to Py2. It's unreasonable to expect the entire python world to freeze in place so you don't have an annoying migration journey. If you need the latest Py3 features, them's just the breaks I'm afraid; eventually APIs need to change.
People get paid to keep running what already exists, not to write new stuff.
Usually new stuff only comes to be if there is a new product being added into the portfolio, and most of the time it comes via an acquisition or external contractors, not new development from scratch in a cooler version of the stuff they are using.
TypeScript today can be written the same way that TypeScript was when it first started to become popular. Yes there are additions all the time, but most of them are, as you observe, irrelevant to you. They're there to make it possible to type patterns that would otherwise be untypeable. That matters for library developers, not so much for application developers.
To the extent there's a barrier to entry, it seems largely one that can be solved with decent tutorials pointing to the simple parts that you're expected to use in your applications (and a culture of not overcomplicating things in application code).
That's funny given many of the changes were made to make C# look more like JavaScript!
C# 6 introduced expression-bodied members for simplified syntax (like JavaScript), null-conditional operators, and string interpolation. C# 7 brought pattern matching, tuples, deconstruction, and local functions. C# 8 introduced nullable reference types for better null safety, async streams, and a more concise switch expression syntax. C# 9 to C# 12 added records, init-only properties, with expressions, and raw string literals, global using directives, top-level statements, list patterns, and primary constructors.
In C#, if you need a string list you can do:
```
List<string> items = []; // Not as concise as JS but type safe.
```
As for TypeScript, nobody is supposed to use most of it -- unless you're authoring a library. You benefit from its features because somebody else is using them.

Languages draw inspiration from each other -- taking the good parts and incorporating them. C# is a vastly better, easier, and safer language than it used to be, and so is JavaScript.
Stupid easy to learn, have some loops, have some conditions, make some memory allocations. You will learn about the fundamentals of computing as well, which you might as well ignore (unknowingly) if you start with something like JavaScript (where is this data living in my computer?).
C as a first language is only easy if you happen to bring along a deep technical interest in (and prior knowledge of) the "technical fundamentals of computing".
Most people do not have that.
Tell them about heap and memory allocations and you will get a blank stare.
But show them some simple functions to make some flashing graphics on the screen - and they will have fun. And can learn the basics of programming at the same time.
And then you can advance more low level, for those who feel the call. But please don't start with it, unless you have some geeky hacker kids in front of you who really want to learn computers. Then C makes sense. For "normal" people not so much.
But there is an implicit context here around those who want to program alongside the professionals. That comes with wanting some deeper understanding of the machine.
But do stay away from the concurrency. I occasionally get flack on that point, but try to remember back to your early programming days, when you were having enough trouble keeping track of how one instruction pointer was flowing; it doesn't help to immediately try to keep track of multiple. Gotta recover the novice mindset for a moment when recommending programming languages.
I used to recommend Python, as many others did. Your cited disadvantages of such languages are certainly true, but Python used to make up for it with the ability to do real work relatively quickly, and while it may not have taught you how the machine worked, it did a good job of teaching programming. But now... well... Python was my primary hobby language for about 8 years around 2000-2008. I'm fluent in Python. I wrote metaclasses. I wrote a bit of a C module. And I can still read it, because I do check in from time to time. But it's not the same language anymore, and almost every change it has made has made it harder to recommend as a new language. It used to be the simple alternative to Perl that still had most of the power... now I think it's harder to read than a lot of Perl 5, what with all the constructs and the rules about what happens and the difficulty of resolving what a given line is going to do with all the ways of overloading and decorating and overriding everything. And the culture of having all this power, but using it selectively, is gone from what I can see; now it's "we have all this power and what a shame it would be not to use it".
"Oh, you want to build an app that does X? Well, first learn C for three months and then switch to Python/Javascript/etc. to build the thing that motivated you in the first place" doesn't fly.
Right. Because no one ever learned C as a first language ever, and those that paradoxically did were worse programmers for it!
That's both a false dichotomy and irrelevant as well.
My message is "There are multiple excellent (even legendary) developers in the short history of our field that learned programming in C. There are many more who primarily used C".
This refutes your point completely.
Everybody who does Express, React, or any other popular advanced libraries with TypeScript is using these features. Some things are simply more useful to libraries than line of business code - that's fine. The line of business code is much better thanks to it.
This is very true and my original post was short sighted. You could, of course, make most upstream dependencies without modern language features. However, their complex jobs get much easier with these features.
Downstream, business logic is much easier to implement without these features compared to complex, low level functionality.
But if I’m writing a module that a lot of other consumers in the codebase will use, and I want to make their lives easy, I might use a lot of advanced TS features to make sure that type safety & inference works perfectly within the module. Whoever consumes it can then rely on that safety, but also the convenience. The module could have some convoluted types just to provide really clean and correct auto-complete in a certain method. But most people don’t need to worry about how that works.
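For example (a sketch with invented names), a template-literal type whose only job is autocomplete:

```ts
type Entity = "user" | "order";
type Action = "created" | "deleted";
type EventName = `${Entity}.${Action}`; // "user.created" | "user.deleted" | ...

function on(event: EventName, handler: () => void): void {
  // registration elided -- the convoluted type is the point here
}

on("user.created", () => {}); // the editor suggests all four names
```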
I think your model of how people use modules is flawed.
I doubt most people using those modules are using typescript to interact with them, whatever the perceived, subjective benefit you see in typing everything.
For example, I use many typescript-written modules without using typescript in the code that uses them, and am better off for it, because I and my R&D work do not want the advanced features of typescript. We can switch to it, or an OOP server language, if that is useful later.
Exposing types usefully in libraries to use "with Typescript", as you claim, means my own code has to be typescript. In that case, to avoid compile errors and a wall of "any" types, I reasonably have to switch my own code to use Typescript classes etc., even where this is just bloat. Another reason I have libraries is to do things without ever interacting with them other than input props (e.g. a drag'n'drop library with JSX components). In that case, the type (JSX Component) is irrelevant for me to include, and for experienced developers, approximately 0% are going to pass something other than a JSX component as an input to a drag'n'drop library, etc.
In other words - I derive benefit from them using Typescript without having to use it myself. Pushing Typescript as "necessary" because popular libraries have interfaces is exactly the kind of thing that slows down R&D and fast processes.
I have used many languages with types for many years. I understand their value. However, much of the value is code coherence, working with other people, and domain models being embedded in the code. These benefits are not always useful in small web applications.
Typing is one of those things... you love it to make your life learning code easier and for big projects, and for certainty when you are coding boring things. For other things in life, there's more to life than writing type definitions and overloading methods. You can be much more productive just using primitives in some scenarios and make research discoveries faster and with more flexibility.
What I have seen is every generation of coders, a new type-heavy language/framework becomes popular (.NET, Java, Typescript), then it becomes "uncool" because people realize how bulky and useless most of it is - especially for anything small/research-y, then it loses adoption and is replaced by another.
I'll put on my Scheme hat and say "with hygienic macros, people can add whichever language features they want." Maybe Rust is a good experiment along those lines: C++ with hygienic macros.
Everything that people keep using grows into a monster of complexity: programming languages, software, operating systems, law. You must maintain backward compatibility, and the urge to add a new feature is too great. There's a cost with moving to the new thing -- let's just put the new thing in the old thing.
I've been learning steadily for 8 or so months now and at no point have I felt the language was unapproachable due to excessive features.
Looking back on what each new version added, I don't think any of the additions were damaging to the simplicity of C#.
I do likely have a biased perspective though, as I use newer C# features every day.
I think that is kind of the point, though. Many of those newer features help with simplifying code and making it less boilerplate-y. To old programmers it is a simple code fix in the IDE to move from 30 lines of variable assignments in a switch to a 5-line switch expression, and they can learn that way. People new to the language typically won't even consider going the complicated route, because they learned an easier way first.
I do concede that when people with less C# experience join a team where modern C# is used, there will be constructs that are not immediately obvious. SharpLab has an “Explain” mode which would be helpful in such cases, but I haven't seen anything like that in IDEs: https://sharplab.io/#v2:C4LgpgHgDgNghgSwHYBoAmIDUAfAAgBgAJcB...
However, as a personal anecdote, we've had a number of developers who have written mostly Java 1.4 (technical reasons) before switching to C# about a year ago. They took up the newer features and syntax almost without problems. Most questions I got from them were along the lines of “Can we also use this feature?” and not “What does this do?”.
Google "typescript interfaces." #1 is a page that has been deprecated for years. How did this happen?
It is hard for me to recommend using Go internally since .NET/Java are just as performant and have such a mature ecosystem, but I crave simplicity in the core libraries.
Here's the link for anyone considering learning Go: https://quii.gitbook.io/learn-go-with-tests
In terms of GC, Go has a specialized design that makes tradeoffs to allow consistent latency and low memory usage. However, this comes at the cost of low sustained allocation and garbage-collection throughput, and Go the language does not necessarily make it obvious where allocations happen. So, as sibling discussions here and under the Go iterators submission indicate, this results in an amount of effort to get rid of all allocations in a hot path that is unthinkable in C#, which is much more straightforward about this and is also able to cope with high allocation throughput with ease, much like Java.
It is indeed true that Java makes different design choices when tuning its GC implementations, but you might see much closer to Go-like memory usage from .NET's back-end services now that DATAS is enabled by default, without the tradeoffs Go comes with.
Another good article for comparing GC between Go and C# https://medium.com/servicetitan-engineering/go-vs-c-part-2-g...
One of the major factors that play in Go's favour is the right attitude to architecting the libraries - zero-copy slicing is much more at the forefront in Go than in .NET (technically incorrect, but not in terms of how the average implementation looks). Also, the flexible nature of C#, combined with it being seen as "be glad we even support this Microsoft's Java" by many vendors, leads to poor-quality vendor libraries. This results in the experience where developers see Go applications be more efficient, not realizing that it's the massively worse implementation of a dependency their .NET solution has to deal with (there was a recent comparison video where .NET was estimated to be slower, but the reality was that it wasn't .NET but the AWS SDK dependency, plus the benchmark author being most familiar with Go and making optimal choices with significant impact there, like using DB connection pooling).
I'm often impressed by how much punishment the GC and compiler can take, continuing to provide competitive performance despite massive amounts of data reallocations and abstraction bloat thrown at them by developers who won't even consider approaching C# in an idiomatic C# way (at the very least by listening to IDE suggestions and warnings). In some areas, I even recommend looking at community libraries first, which are likely to provide a far superior experience if documentation and a brief code audit indicate that their authors care(tm) - one of the most important metrics.
Depends on the implementation. gc doesn't put a whole lot of effort into optimization, but it isn't the only implementation. In fact, the Go project insists that there must be more than one implementation as part of its mandate.
Until this changes, the "Depends on the implementation" statement is not going to be true in the context of better performance.
https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
Then you get forced into using IntelliJ because it seems to smooth over a lot of the tooling's problems with "magic".
It's horrible.
I appreciate that this is mostly just a generic rant, but it's not really suitable here, because this is a feature which is being added with the sole goal of improved performance.
There's only so much you can do to optimize the extremely dynamic regular objects in JS, and there's no hope of using them for shared-memory multithreading. The purpose of this proposal is to have a less dynamic kind of object, one which can be made more performant and suitable for shared-memory multithreading.
Are you really confused by file scoped namespaces or target-typed new or even null coalesce assignments?
You don't have to use them -- although Visual Studio will helpfully suggest places you can use them.
If I had never seen a pattern match switch statement before (and there was a point where I didn't) it's sort of immediately obvious what it does.
```
(int x, string y) = (default, default);
```
The let keyword didn't exist in JS when Node was first released, nor did for/of, which while unstated in your post, is probably what you are thinking of when you posted this. The language has not stayed the same, at all.
The funny thing is if you used F# over a decade ago almost all the C# improvements seem familiar. They were lifted from F#, some of them badly.
And I know F# borrows a lot from OCaml. But it's hard to fathom why we need to use the badly adopted F# features in C# instead of just getting F# as a main Microsoft adopted language.
This is a culture issue and has always existed in C#, Java and C++ communities sadly (and I'm seeing this now with TS just as much, some Go examples are not beacons of readability either, I assume other languages suffer from this similarly).
In the past, people abused BinaryFormatter, XML-based DSLs, occasionally dynamic, Java-style factories of factories of factories, abuse of AOP, etc. Nowadays, this is supplanted by completely misplaced use of DDD, Mediatr, occasional AutoMapper use (oh god, at least use Mapperly or Mapster) and continuous spam of 3 projects and 57 file-sized back-ends for something that can be written under ~300 LOC split into two files using minimal API, records and pattern matching (with EF Core even!).
Neither is an example of good code, and the slow but steady realization that simplicity is the key makes me hopeful, but the slow pace of this, and new ways to make the job of a developer and a computer more difficult that are sometimes introduced by community and libraries surrounding .NET by MS themselves sour the impression.
You don't have to use every feature of the language. Especially not when you are just learning.
> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!
Exactly. But no-one seems to be arguing that typescript has a huge barrier to entry.
Geez I'd sure hope not.
If you liked C++11, you can use C++11. Every compiler, platform, and library will support it.
No one erased it and made you go back to C99.
The people who are in a position to decide what features get added to a language are usually top experts and are unlikely to have any reasonable perspective on how complicated is too complicated for the rest of us.
If you live and breathe a language, just one more feature can seem like a small deal.
I think it becomes much more reasonable when that one more feature enables an entire set of capabilities and isn’t just something a library or an existing feature could cover.
"There are only two kinds of languages: the ones people complain about and the ones nobody uses."
… and the people working on these projects need to deliver, else their performance review won’t be good, and their financial rewards (merit increase, bonus, refresher) will be low. And here we are.
Edit: I realize I’m repeating what you said too, but I wanted to make it more clear what’s going on.
At least we moved past webpack mostly.
Obviously they can't make TS more performant (since it doesn't execute), but C# is very performant and even surpasses Go in the TechEmpower benchmarks.
One of the best things .NET did was adding minimal APIs in .NET 6 (I think) that are more like Express. They removed a lot of boilerplate and unnecessary stuff, making it easier to start building an API.
We already use regular JS for some of our internal libraries, because keeping up with how TS transpiles things into JS is just too annoying. Don’t get me wrong, it gets it right 98% of the time, but because it’s not every time, we have to check. The disadvantage is that we actually need/want some form of types. We get them via JSDoc, which can frankly do almost everything Typescript does for us, but with much poorer IDE support (for the most part). Also more cumbersome than simply having something like structs.
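For the curious, a minimal sketch of that JSDoc style (names invented); TS-aware editors and `tsc` with `checkJs` understand it without a transpile step:

```js
// @ts-check

/**
 * @typedef {object} User
 * @property {string} name
 * @property {number} [age] optional
 */

/**
 * @param {User} user
 * @returns {string}
 */
function greet(user) {
  return `Hello, ${user.name}`;
}
```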
Programming languages are like any other software product, evolution or stagnation.
Eventually they might implode, however whatever comes after will follow the same cycle yet again.
𝅘𝅥𝅮𝅘𝅥𝅮𝅘𝅥𝅮𝅘𝅥𝅮
They've got decorators, record tuples, shadow realms, and rich rekeying
Dynamic imports, lazy modules, async contexts now displaying
JSON parsing, destructure privates, string dedenters, map emplacers
Symbols pointing, pipe operators, range iterators, code enhancers
Eager asyncs, resource tracking, strict type checks, and error mapping
Phase imports, struct layouts, buffering specs for data stacking
Temporal zones, buffer edges, chunking calls for nested fragments
Explicit locks, throw expressions, float16s for rounding segments
Base64 for typed arrays, joint collections, parsing pathways
Atomic pauses, void discarding, module scopes for seamless relays
Math precision, tuple locking, module imports, code unlocking
Source phase parses, regex bounds, iterators kept from blocking
Iterating, winding modules, atomic gates with locks unbound
Helper methods, contexts binding, async helpers, code aligning
Soffit panels, circuit breakers, vacuum cleaners, coffee makers
Calculators, generators, matching salt and pepper shakers
I can't wait, (no I) I can't wait (oh when)
When are they gonna open the door?
I'm goin' (yes I'm) goin', I'm a-goin' to the
ECMAScript Store
Proxy traps and symbol iterators, BigInts for calculations greater
Nullish merging, optional chaining, code that's always up-to-date-ing
Temporal parsing, binary shifting, WeakRefs for memory lifting
Intl APIs for global fitting, Promise.any for fastest hitting
Private fields and static blocks, top-level awaits unblock the clocks
Logical assignments, numeric seps, each update brings new shocks
Array flattening, object spreading, RegExp lookbehinds not dreading
Class fields, global this, and more, the features keep on threading
I can't wait, (no I) I can't wait (oh when)
When will they add just one feature more?
I'm coding (yes I'm) coding, I'm a-coding with the ECMAScript lore
It could also include babelrc and eslintrc generator as proposed in another comment below.
For example, if you're protecting the internal state of some data structure with a mutex, the mutex lock and unlock operations are what ensures ordering and visibility of your memory writes. In the critical section, you don't need to do atomic, sequentially consistent accesses. Doing so has no additional safety and only introduces performance overhead, which can be significant on certain architectures.
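A minimal sketch of that point, assuming a SharedArrayBuffer shared with workers: the lock operations use sequentially consistent Atomics calls, which are what order and publish the plain writes made inside the critical section; the data accesses themselves stay non-atomic:

```js
const sab = new SharedArrayBuffer(8);
const lock = new Int32Array(sab, 0, 1); // 0 = unlocked, 1 = locked
const data = new Int32Array(sab, 4, 1); // state guarded by the lock

function withLock(fn) {
  // Acquire: spin until we flip 0 -> 1.
  while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {}
  try {
    fn();
  } finally {
    Atomics.store(lock, 0, 0); // Release: publishes the writes above.
  }
}

withLock(() => {
  data[0] += 1; // plain, non-atomic access inside the critical section
});
```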
The main reason it is there today is to satisfy some delegates' requirement that we build in guardrails so as to naturally discourage authors from creating thread-unsafe public APIs and libraries by default. We're exploring other ideas to try to satisfy that requirement without unsafe blocks.
The word "unsafe" will be picked up as meaning "can infect your computer" which we can already see examples of these messages.
Granted, a JS runtime is significantly more complex than a WASM runtime so there is more room for error.
I guess it depends on how you get to said crash, but no, data races on Wasm shared memory cannot "crash" anything. At worst racy reads/writes can produce garbage (primitive) values and put garbage bits into memory locations involved in the accesses. Putting garbage bits into a Wasm memory could lead to a program's logic having bugs (e.g. it could then try to access out of bounds or trap for another reason), but the accesses themselves can't crash anything.
It's unsafe as in, if you don't follow the rules, the resulting value is ~rand().
For those familiar with C/C++ terminology, this is the tame "unspecified behavior" (not the nasal demon "undefined behavior.")
Having said that, an "unspecified result" can still come from anywhere, like a value left in a register from some previous computation or other "garbage" on the stack or heap. This still can be a security issue, even though the behavior is not completely undefined.
The rest is correct.
Talking about JS proposals, I'm looking forward to this one: https://github.com/tc39/proposal-record-tuple
Records and tuples can make a lot of logic much easier to read, and way less fragile. Not sure how they would play together with the shared structs though.
The reluctance of the authors is due to backward compatibility with sandboxed "secure" JavaScript (SES). That said, every other language in existence that has immutable structs and records allows putting arbitrary values in them.
So it's at a standstill, unfortunately.
I thought that the whole point is to have guaranteed deep immutability, which you can't have if it's got arbitrary objects in it.
The behaviour of equality. Frozen objects are already considered to have unique identities, in that `Object.freeze({}) !== Object.freeze({})` even though both objects are otherwise indistinguishable. This behaviour can't be changed and it relates to the fact that `Object.freeze(a) === a`.
> I thought that the whole point is to have guaranteed deep immutability
Not really. The whole point apparently according to most people[0] is to have composite values that don't have unique identities, so they fit in with all the existing comparison operations (eg, `===`, `Map`, `indexOf`, `includes`) just as you can do with strings.
Immutability is a prerequisite for this, since if `a` and `b` are mutable, mutating `a` might be different to mutating `b`. Thinking again about strings, equality works because strings are immutable:
const foo = "foo", bar = "bar";
const a = foo + bar;
const b = foo + bar;
a === b; // true
Implementations will typically use different underlying memory allocations for these strings[1], but at a language level they are considered to be the same value. If it were possible to modify one of the strings (but not the other) using `a[0] = "x";` it would mean `a` and `b` are not equivalent so should not be considered equal.

As explained here[2], deep immutability is not necessary for this behaviour.
In my opinion guaranteed "deep immutability" is not generally useful/meaningful (if you have a particular use case, feel free to share it). In theory it's not possible to enforce "deep immutability" because someone can always refer to something mutable, whether that's an object reference or a number indexing a mutable array.
If you really do want something that guarantees a certain notion of "deep immutability", this concept seems somewhat orthogonal to records/tuples, since there are existing values (eg, strings and numbers) that should be considered deeply immutable, so you'd expect to have a separate predicate[3][4] for detecting this, which would be able to effectively search a given value for object references.
In case you're interested I tried to summarise the logic behind the rejection of this behaviour[5] (which I disagree with), but it's very much a TLDR so further reading of linked issues would be required to understand the points made. Interestingly, this post is on an issue raised by the odd person that actually tried to use the feature and naturally ran into this restriction.
Sorry for this massive wall of text, but I think it's hard to capture the various trains of thought concisely.
[0] https://github.com/tc39/proposal-record-tuple/issues/387#iss...
[1] https://github.com/tc39/proposal-record-tuple/issues/292#iss...
[2] https://github.com/tc39/proposal-record-tuple/issues/292#iss...
[3] https://github.com/tc39/proposal-record-tuple/issues/292#iss...
[4] https://github.com/tc39/proposal-record-tuple/issues/206 (I believe sjrd (GP) earlier independently came up with the same function name and behaviour somewhere in this thread, but GitHub seems to be failing to load it)
[5] https://github.com/tc39/proposal-record-tuple/issues/390#iss...
Otherwise, there's the argument that "x.y" syntax shan't be used to access a mutable object from an immutable record, but that just feels like the all-too-common motive of "we must ensure that users write morally-correct code (given our weird idiosyncratic idea of moral correctness), or otherwise make them pay the price for their sins".
I haven't really been following the Shadow Realm proposal (I'm not part of TC39, so only familiar with certain proposals), but I don't think it should conflict with R/T.
If R/T values are allowed to be passed between realms, they should effectively be "transformed" such that eg, `f(#[v])` is equivalent to `f(#[f(v)])` (where `f` is the transformation that allows values to be passed between realms). For "deeply immutable" values (no object references), `f(v)` will simply return `v` (eg, `#[42]`, `f(#[42])` and `f(#[f(42)])` are all the same) and a membrane should be able to trivially optimise this case.
From this comment[0] it sounds like `f({})` in the current Shadow Realm proposal will throw an error, so I'd expect that `f(#[{}])` would also throw an error.
As you were pointing out, I think the only real contention between R/T and realms is in existing JS implementations of membranes, particularly because they might use the following condition to detect if something is "deeply immutable":
```
v === null || typeof v !== "object" && typeof v !== "function"
```
If `typeof #[{}] === "tuple"`, then their `f` function will pass that value through without handling the contained object value by throwing or by creating/finding a proxy.

If `typeof #[{}] === "object"`, it should be fine because `f(#[{}])` will either throw or create/find a proxy for the tuple. There might be some unexpected behaviour around equality of R/T values passed through the membrane, but this is pretty obscure and it should be fixed once the membrane library is updated to handle R/T values.
Personally, I'm still not 100% convinced that the assumptions made from the above condition are important enough to cause such a change to the proposal, but I don't see the value of `typeof #[]` as being a usability issue. Code that needs to check the types of things is a bit smelly to me, but in cases where you do need to check the type, `typeof v === "tuple"` and `Tuple.isTuple(v)` both seem usable to me, so just making `typeof #[] === "object"` should be fine and it solves this hypothetical issue. This is similar to array objects, which are also fundamentally special (`Object.create(Array.prototype)` is not an array object) and are detected using `Array.isArray(v)`.
> Otherwise, there's the argument that "x.y" syntax shan't be used to access a mutable object from an immutable record, but that just feels like the all-too-common motive of "we must ensure that users write morally-correct code (given our weird idiosyncratic idea of moral correctness), or otherwise make them pay the price for their sins".
Agreed, and I've pointed out[1] that even the current proposal doesn't address this, since unless you've done some defensive check on `x`, there's nothing stopping someone passing a mutable object for `x` instead of a record. If you do want to perform a dynamic[2] defensive check, perhaps you should be asking "is it deeply immutable?" or even checking its shape rather than "is it a record?".
[0] https://github.com/tc39/proposal-record-tuple/issues/390#iss...
[1] https://github.com/tc39/proposal-record-tuple/issues/292#iss...
[2] If you're using a type system like TypeScript, this check should happen statically, because you'll use a type that specifies that it's both a record and the types of the properties within it, so your type will encode whether or not it contains mutable objects
With the rise of WASM part of me feels like we shouldn't even try to make JS better at multithreading and just use other languages better suited to the purpose. But then I'm a pessimist.
Otherwise, that's all this seems like to me: a class where all instances are automatically frozen. Which is a great semantic, but this proposal exposes way too much of the internals to achieve it.
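As a point of comparison, here is a userland approximation you can write today; note it is sealed (fixed shape, fields still writable) rather than frozen, which is closer to what a struct would guarantee:

```js
"use strict";

class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    Object.seal(this); // no properties can be added or removed now
  }
}

const p = new Point(1, 2);
p.x = 3;    // fine: fields remain writable
// p.z = 4; // TypeError in strict mode: p is not extensible
```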
Modern development is so goofy.
Beginner: just clone everything
Intermediate: work out every intricacy that allows us to use multiple lifetimes
Expert: just clone everything
This proposal feels like it's in the middle.
I think TS is a negative influence on JS, because now instead of saying "maybe we should fix the JS type system" they just say "no need to fix what's broken, people who care will just use TS anyway" (even though TS can only do so much).
On the other hand, TS mainstreamed the idea of typed JS (well, ActionScript did that decades ago, but somehow no one noticed or cared?), so it's also a positive influence?
Most people are drawn to WASM because "I can do frontend stuff without writing JS!", but for the most part that's not true. In my experience, the problems introduced by the indirection and interop, the more complex mental model, and the bloated (and fragile) build system were not worth it, and I just switched back to TS.
So I do really wish that JS would be improved -- it remains inescapable -- especially with regard to fixing fundamental design flaws rather than just adding more shiny stuff on top.
In my experience, the advantage of JavaScript over other languages I have used - COBOL, Fortran, assembly, C, C++, Java - is the fine balance it strikes between expressibility and effectiveness.
I am not opposed to shared memory multi-threading, but question the cost/benefit ratio of this proposal. As many comments suggest, maintaining expressibility is a high priority and there are plenty of gotchas in JavaScript already.
As an example, I find the use of an upfront term like "async" to work quite well. If I see that term I can easily switch hats and look at the code differently. Perhaps we could look at other mechanisms, say an upfront term like "shm", rather than a new type, but what do I know?
[edit for clarity since I think faster than I can type]
`class` is entirely unnecessary and essentially tries to turn JS into a class-oriented language, when at its core it is object-oriented.
I never create classes. I always create factory functions which, when appropriate, can accept other objects for composition.
And I don't use prototypes, because they are unnecessary as well. Thus sparing me the inconvenience, and potential issues, of using 'this'.
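For illustration, a minimal sketch of what I mean (names made up):

```js
function createLogger(prefix) {
  return {
    log(msg) { console.log(prefix + ": " + msg); },
  };
}

function createCounter({ logger }) {
  let count = 0; // private via the closure; no `this`, no prototype
  return {
    increment() {
      count += 1;
      logger.log("count is " + count);
    },
    value() { return count; },
  };
}

// Composition: pass one object into another factory.
const counter = createCounter({ logger: createLogger("demo") });
counter.increment(); // demo: count is 1
```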
In my dreams those who want to turn JS into c# or Java should just create a language they like and stop piling on to JS.
But, at least so far, the core of JS has not been ruined.
That said, there are some new features I like. Promises/async/await, Map, Set, enhancements to Array being among them. But to my way of thinking they do not change the nature of the language in any way.
> In my dreams those who want to turn JS into c# or Java should just create a language they like and stop piling on to JS.
We could even share this dream if browser vendors weren't so "who's the boss? I'm da boss" about extensions and alternatives. So we have to live with a common denominator, which surprisingly isn't as bad as it could be, really.
I'm guessing JS wasn't your first language
Good intuition. My first languages were BASIC, 8080 asm, x86 asm, Pascal, C, Perl, Python, Haskell (most useless), Lua, ObjC. JS/TS is only a recent addition, so I might have missed some fashion ideas.
Tongue in cheek aside, if you're an old dev, there's nothing you have to listen to, because you can see whether you have a problem yourself and decide for yourself. You can be your own advisor. I see both "classes" and "just functions" ways clearly and can convert my current codebases in my mind between these two. Nothing really changes for the latter, apart from bulky import sections, lots of `* as ident` imports, context-arg passing and a few dispatch points. Objects (non-strictly related groups of data and methods) still exist and hold refs to event/callback emitters. So my reasoning isn't why, my reasoning is why not. I have a tool, I have business logic, pen pineapple apple pen. Don't overthink it is my main principle.
Do I need to introduce composition? Do I have it already? How is it better than what I’m doing? Is it? What am I missing? What are they missing? What if they don’t? What if we speak of different things? These are the questions of a restless butt that cannot find rest on any stool. Instead it should ask: Do I have a problem?
I started with 6800 machine language. Then C, Smalltalk, Scheme, etc.
Rather than spend a lot of time and botch a comparison between classes and factory functions I'll link you to an article.
He went further, introducing something he calls stamps, but I found them to be awkward the only time I tried to use them.
https://medium.com/javascript-scene/javascript-factory-funct...
That said, for me it's hard to buy into his arguments, in the sense that it doesn't matter that much, if at all. instanceof doesn't work across realms and is nuanced for direct prototyping and Object.create(), but I never use or care about these in my code, by design. There's no way such a value could appear in a false-negative instanceof comparison, so. A similar thing happens in COM/OLE integrated runtimes, where you have to be careful with what quacks like a date or a string but is neither, due to a wrapper. But that's expected.
I believe the real issue here is that iframes/etc usually get presented as "some values aren't what they seem, so use X, be careful" rather than "guys, it's an effing wrapper around an effing different runtime, which we found to be an overall anti-pattern many years ago". Browsers and webguys normalized it; well, they normalized a lot of crazy stuff. Not my problem. There's no need to learn to balance on two chairs when it's not what you do when sober. I still use Array.isArray(), but only because every linter out there annoys you to hell into it.
Tldr: classes are neat; you can pry them from my cold dead hands.
The only thing to care about with classes is not to fall into the inheritance trap, and not for reasons of instanceof. Inheritance is a tree of ladders attached with duct tape; you have to know what you're trying to do to your design before thinking about it. The most sane use of inheritance is a one-off from a library to a user (two separate developing agents agree on an implied behavior, "I implemented it for you to randomly extend and pass back" mode), or for helping type inference. Otherwise, the way to go is to eject common behavior into a separate class or a couple of functions (aka composition).
I really like the flexibility of factory functions; that is the main point of the article, imo.
COM/OLE ... that takes me back to the early 90s, a place I hoped to never visit again!
:)
Looking through the source of Replicache, here are some classes we use:
- KVStore
- DAGStore
- Transaction
I mean ... I can of course model these w/o classes, but encapsulating the state and methods together feels right to me. Especially when there is private state that only the methods should manipulate.
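Something like this (a hypothetical sketch, not the actual Replicache code):

```js
class KVStore {
  #entries = new Map(); // private state only the methods can touch

  get(key) { return this.#entries.get(key); }
  put(key, value) { this.#entries.set(key, value); }
  has(key) { return this.#entries.has(key); }
}
```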
We use composition all over the place and rarely use inheritance, so I don't think it's just some deficiency of knowledge.
Pre JS classes, the js community emulated classes w/ the prototype chain and that's what I'd have done for these classes if real JS classes weren't available.
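That pre-`class` pattern looked roughly like this (same hypothetical store as above):

```js
// A constructor function with methods shared on the prototype,
// which `class` syntax now sugars over.
function KVStore() {
  this._entries = new Map(); // "private" only by underscore convention
}
KVStore.prototype.get = function (key) {
  return this._entries.get(key);
};
KVStore.prototype.put = function (key, value) {
  this._entries.set(key, value);
};
```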
Emulating classes is, imo, exactly the problem.
Using factory functions which create and return an object, with variables passed to and created in the function, handles encapsulation.
And there is no `this` to deal with.
So no, JavaScript didn't really "add classes". It just had a very annoying lower-level syntax for them from the beginning and fixed it after a while. It wouldn't survive the pressure if it had no classes at all, because this idea is fundamental to programming and to how we think: you-do.
One may pretend not to have classes through closures, but technically that's just classes again, because you have a bunch of functions with a shared upvalue block. You just hold it the other way round lexically, by a method instead of a context.
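Side by side, the two spellings of the same thing (a toy example):

```js
// Closure version: the methods share the upvalue `x`.
function makePoint(x) {
  return {
    getX: () => x,
    move: (dx) => { x += dx; },
  };
}

// Class version: the methods share `this`.
class Point {
  constructor(x) { this.x = x; }
  getX() { return this.x; }
  move(dx) { this.x += dx; }
}
```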
I believe this common idea of alienating classes stems from the general OOP stigma since the times of “design patterns”.
For example,
1. field declarations [1] make sure that the fields are always initialized in the same order. That way most of your functions end up monomorphic, instead of being polymorphic [2].
2. Method declarations are also (almost) free, since you only pay for them once, during class initialization.
You also get a few other niceties such as private properties. You can emulate private properties with closures in factory functions but V8 has a hard time optimizing, unfortunately.
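For example (a sketch; the point is only that the field order is fixed up front):

```js
class Vec {
  // Field declarations: every instance is born with the same shape.
  x = 0;
  y = 0;
  #id = 0; // private field the engine can see, unlike a closure variable

  constructor(x, y, id) {
    this.x = x;
    this.y = y;
    this.#id = id;
  }
}
```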
---
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
```js
function makeThing(options, usethistoo) {
  let foo = options.foo;
  let thistoo = usethistoo;

  return {
    // ...functions closing over foo and thistoo
  };
}
```
You can use d8 to check what the class structure ends up looking like [2]
---
Eh, prototypes share, instead of create, method references. I guess you can use delegate objects too, though, unless you're just doing pure functions.
Sometimes programmers spend way too much time optimizing code which doesn't really need it.
In my experience how data is structured is almost always the most important factor when it comes to performance.
Good data structure + simple code === performance.
I suppose if you want a defined/packed memory layout you can already use SharedArrayBuffer and if you want to store objects in it you can use this BufferBackedObjects library they linked. https://github.com/GoogleChromeLabs/buffer-backed-object
I also expect that in browsers this will have the same cross-origin isolation requirements as SharedArrayBuffer that make it difficult to use.
What kind of types did you have in mind? Machine integers and "any" (i.e., a JS primitive or object)?
And yes, in browsers this will be gated by cross-origin isolation.
I feel like trying to add fast data structures into JavaScript is futile, I think at this point it would be better to make it easier for JavaScript and the browser to interface with faster languages.
The only thing I would add to JavaScript at this point is first class TypeScript support so that we can ditch the transpilers.
```js
// Step 2: Convert the string to binary data
const encoder = new TextEncoder();
const encodedJson = encoder.encode(jsonString);

// Step 3: Create a SharedArrayBuffer and a Uint8Array view
const sharedArrayBuffer = new SharedArrayBuffer(encodedJson.length);
const sharedArray = new Uint8Array(sharedArrayBuffer);

// Step 4: Store the encoded data in the SharedArrayBuffer
sharedArray.set(encodedJson);
```
Now you can use Atomics, no?
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
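For example, picking up the snippet above, you could add a small Int32Array flag for signalling (a sketch; both buffers would have to be posted to the worker, and in browsers `Atomics.wait` can only block inside a worker):

```js
const signal = new Int32Array(new SharedArrayBuffer(4));

// In the worker: block until the main thread flips the flag.
// Atomics.wait(signal, 0, 0);

// On the main thread: publish the data, then wake the waiter.
Atomics.store(signal, 0, 1);
Atomics.notify(signal, 0);
```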
The bad part is that people wouldn't necessarily be prepared for their semantics (are they value- or reference-based?), or for how to share prototypes between environments (mentioned as a problem in the proposal itself); I'm also not entirely sure whether this proposal would add complexity vs. security for Spectre-like attacks.
It'd be useful, but whether it's worth it is another question. (And would all major players see interest in it? Especially considering that it'd need to be a "JS0"-level proposal if they go in that direction. There was a post here a few days ago about layering runtimes, with JS0 being the core and everything else being syntax transforms on top.)
[1] just got some bad news
[2] all in all, I love working in JS when I have to, but I've worked in it long enough to know at least a good many of the footguns
Today I went for a walk (which I don't usually do), and I saw a squirrel.
Or have I been doing it wrong?

In e.g. Elixir these are non-issues. Please, just give us declarative structs that are immutable by default (if they're really needed, make constructors and mutability opt-in). Isn't the trend already toward more FP in JS?
Frontend is an obvious one but also using services like CF Workers or Deno Deploy which are optimized for V8. You're going to get better uptime and lower latency than anything else at that cost.
Do you mean like reactive data?
You can use other languages that compile to WebAssembly, but it's borderline as it's basically just a VM / self-contained executable that you can pipe to. It's completely isolated from the browser.
"but wasm has to call JavaScript to use browser APIs" WasmGC is shipped in Chrome and Firefox and enabled by default in WebKit nightly
I personally care a lot more about having a confidence-inspiring language and ecosystem.
In my experience with Rust and WASM (with various tools such as Dioxus), I find myself caring a lot more about the WASM ecosystem and browser evolution/improvement.
For example, at bottom, the JS interop feels pretty sub-optimal. Calling this "hacky" might even be deserved: I'm talking about memory serialization between JS-land and WASM-land. As I understand it, we may see significant improvement under the hood in the next few years. (I'm not an expert on the particular proposals, their adoption, etc. Please weigh in if you have a better sense.)
I'm all for having a confidence-inspiring language and ecosystem, don't get me wrong, but it's kind of a non-starter if I can't build at the same pace in Rust as I can in typical web technologies.
I'd improve this proposal in two ways:
1. Explicitly define the layout with types. It's new syntax already, you can be spicy here.
2. Define a way for structs to be directly read into and out of ArrayBuffers. Fixed layout memory and serialization go hand in hand. Obviously a lot of unanswered questions here but that's the point of the process.
The unsafe block stuff, frankly, seems like it should be part of a separate proposal.
You don't strictly need known/consistent types, but it sure helps, since otherwise everything needs to be 8 bytes.
I don't think a way to read into and out of ArrayBuffers is possible, since these can have pointers in them. I think it needs a StructArray class instead, so there's a way to actually make a compact memory array out of all of this.
Arguably that's worse than what the runtime is able to do today already with hidden classes.
> I don't think a way to read into and out of ArrayBuffers is possible
If you know all the types and only allow structs and primitives, you could use relative pointers to encode the 2nd+ references to structs that appear more than once in the encoded object. You'd need a StructArray for efficient arrays, but a linked list would encode pretty compactly. But you're very right.
When applying ReactJS in webdev after doing all kinds of engineering in all kinds of (mostly typed) languages in many runtimes, I was so surprised that JS did not actually have a struct/record as seen in C/Pascal. Everything is a prototype that pretends it's an object, but without types and pointers, plus abstraction layers that add complexity to gain backwards compatibility.
Not even some object hack that many OO and compiled languages had. ES did not add it either, and my hopes were in WebAsm.
This proposal however seems like the actual plan that I'd like to use a lot.
A lot of the code complexity was to get simple guarantees for data quality. The alternative was to not care, either a feature or caveat of the used prototype model.
This way I'd assume eg. decorators would be usable on struct fields and methods, but engines would be safe to cache prototype method lookup result values without any validity cell mechanics. I would assume this could make prototype method calls on structs very fast indeed.
By the way, doesn't V8's optimizer already do something like this internally? I read one of their tech blogs back in the day that explained how they analyze the structure of objects and whenever possible, compile it to the equivalent of a C++ class.
I guess doing it explicitly makes the optimizer's job much easier -- the more guarantees you give it about what won't happen, the more optimizations it's free to make.
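Right, and you can hand the optimizer that guarantee informally today by always creating objects with the same properties in the same order:

```js
function makeUser(name, age) {
  return { name, age }; // name first, then age, every time: one hidden class
}

const a = makeUser("a", 1);
const b = { age: 2, name: "b" }; // different property order = different shape
```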
If you are fairly senior or aiming for some sort of promotion this is the sort of thing that looks great on your resume.
I doubt that it is driven by a desire to help consuming devs build better quality products more quickly or easily.
1) Structs encourage a coding style that restricts what you can do. This inflexibility is then negated by adding unsafe blocks?
2) Structs don't, as far as I can see, address any of the _actual_ weaknesses of JS classes - such as not being able to create async constructors.
3) The cited performance benefits seem a bit strange. JS has no access to pointers or memory by design, so I don't understand why structs will automatically make things faster. Surely it makes more sense to refine the V8 engine, or even focus on WASM, rather than adding syntactic sugar to vanilla JS.
That said - props to people who care enough to write a proposal - and if I am missing the point of structs, sorry for the negativity.
I'd rather see binary struct views added to typed arrays. Ideally with a settable offset so you don't have to create a new view for every instance. That seems more useful than this middle ground that can already be poly-filled. I guess binary structs can also be poly-filled but it feels like a far more obvious speed win. Marshalling data in/out of WASM, in out of WebGPU/WebGL, parsing binary files, and sharing data across shared memory all get solved at once and with speed.
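A rough sketch of what I mean, poly-filled with DataView today (layout made up: `{ id: u32, score: f32 }`, 8 bytes per record, little-endian):

```js
class RecordView {
  constructor(buffer, offset = 0) {
    this.view = new DataView(buffer);
    this.offset = offset; // settable: retarget without creating a new view
  }
  get id() { return this.view.getUint32(this.offset, true); }
  set id(v) { this.view.setUint32(this.offset, v, true); }
  get score() { return this.view.getFloat32(this.offset + 4, true); }
  set score(v) { this.view.setFloat32(this.offset + 4, v, true); }
}

const buf = new ArrayBuffer(8 * 100); // room for 100 records
const rec = new RecordView(buf);
rec.offset = 8 * 3; // point the same view at record #3
rec.id = 42;
```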
2. Indeed, structs are on an entirely different track from classes. Only the syntax is borrowed from them.
3. There's a bunch of stuff that the engine will do for you to try to make your code faster. The most important thing (arguably) is inline caching: when you access `foo.bar` inside a function, your engine will remember the "shape" of the `foo` object (if it is an object, that is) and where the property `bar` was found inside of it. Unfortunately, objects tend to be pretty fluid things, so the shape of an object changes. This creates a "transition" graph of shapes, and it's pretty hairy stuff. It's also a source of memory safety bugs in browsers, as browsers want to avoid re-checking the shape of an object if it cannot have changed, but this is mostly a manual optimisation, and e.g. Proxies really make it so nearly everything can change an object's shape. A misapplied shape caching optimisation is easy to turn into an arbitrary read/write primitive, which is then a great way to escape the sandbox.
Imagine then that an object type existed that could be primitively guaranteed to never change its shape? Oh, the engine would loooove that. No worries about memory safety mistakes; just cache the shape when you first see it and off to the races you go!
This applies doubly to any prototypes (which here are proposed to be only sealed; I'd personally want to see them frozen so that not only the shape can be cached but also the value): An object's shape may stay the same but the prototype may change with key deletions and additions. This means that looking up that function to call for `obj.hasOwnProperty("key")` needs to, theoretically, be redone every time. Engines of course optimise this into a fairly complex linked list of booleans, but by golly wouldn't it be easier if the engine could just statically cache that the property we're looking for is found in this particular prototype object at a particular memory offset?
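A tiny example of the invalidation problem (a sketch):

```js
class A {
  m() { return 1; }
}
const a = new A();
a.m(); // the engine can cache: `m` lives on A.prototype
A.prototype.m = function () { return 2; }; // mutation invalidates that cache
a.m(); // the lookup (or at least a validity check) has to happen again
```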
Source: I lurk around in some adjacent circles, and am writing my own JavaScript engine built with potentially peculiar ideas about what makes good JavaScript.
Fortunately we still aren't forced to use all the 'enhancements'.
JS is such a simple, dynamic language. It should just stay this way. Please stop bloating it with every feature that’s trendy this year. We already have classes that we didn’t need. We don’t need structs for sure.
>It should just stay this way
Counterpoint: JS has been evolving significantly, look at ES6 and ES8 in particular if you need help finding examples.
```js
import * as fooNs from './foo';

fooNs.barBazQuuxFoo(foo, …);
// vs
foo.barBazQuux(…);
```
Don't use vscode, or never bother to write JSDoc or do any strict typing? Never mind. Good luck with your codebase.
The nice thing about fixed layout structs is that it leans in to optimizations people are already doing based on the behavior of JS engines where 'shapes' of objects are important and properties can be looked up by offset if you keep your code monomorphic. It can be a bit of a headache to enforce this and you can accidentally fall off a performance cliff if you end up with many 'shapes' for the same thing. By making this a language feature it codifies and blesses what was essentially a hack relying on the implementation of the underlying engine that could change at any time.
There are also TypedArrays, which do provide a bunch of cache-friendly (but slightly unergonomic) ways to organize data.
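For instance, the classic structure-of-arrays layout (a sketch):

```js
// One field per typed array, one record per index, instead of an
// array of { x, y } objects.
const N = 1000;
const xs = new Float64Array(N);
const ys = new Float64Array(N);

function translateAll(dx, dy) {
  for (let i = 0; i < N; i++) { // tight, cache-friendly loop
    xs[i] += dx;
    ys[i] += dy;
  }
}
```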
A good resource for the sorts of things people are doing to write high-performance JS is here: https://romgrk.com/posts/optimizing-javascript
The limiting factor on a program's performance should be the design of algorithms and data structures, not the programmer's choice of language or runtime.
Not with that attitude, you don't.
> Just leave these things to WebAssembly where needed and leave JS as a slow, dynamic language we use for web apps.
That ship sailed when they made V8 and the performance race started.
Yes, it is surprising. https://stackoverflow.com/questions/6586670/how-does-javascr...
And when the JIT kicks in, it does all the usual calculate-the-offset things in generated code.
Trendy structs. Did I return to 1980? (wipes happy tear)
> Give developers an alternative to classes that favors a higher performance ceiling and static analyzability over flexibility.
Is an entirely reasonable goal. Object shape in JS tends to go through a fixed pattern of mutation immediately after construction, and although that can sometimes be analysed away by the JIT, there are a lot of edge cases that make it tricky.

You may not care, but I bet almost everybody who has actually worked on a JS engine does, and has good reasons for doing so.