I am not sure how to really refine this thought I have had, but I have this fear that every language eventually gets so bloated and complicated that it has a huge barrier to entry.

The ones that stand out the most to me are C# and Typescript.

Microsoft has a large team dedicated to improving these languages constantly, and instead of focusing exclusively on making them easier to use or more performant, they are constantly adding features. After all, it is their job. They are incentivized to keep making them more complex.

The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.

One of the reasons I have so much fun working in node/Javascript these days is because it is simple and not much has changed in express/node/etc for a long time. If I need an iterable that I can simply move through, I just do `let items = [];`. It is so easy and hasn't changed for so many years. I worry that we'll eventually end up with a dozen ways to do an array and modern code will become much more challenging to read.

When Typescript first came out, it was great. Types in Javascript are something we've always wanted. Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!

This is probably just old man ranting, but I think there's something there. The old version I used to debate about was C vs C++. Now look at modern C++, it's crazy powerful but so jam packed that many people have just gone back to C.

Javascript is not simple AT ALL.

It has 3 ways to declare functions, multiple variations on arrow function syntax, a weird prototypal inheritance system, objects you can create by calling "new" on functions, object literals that can act as pseudo-classes, classes, decorators, for-i loops + map + filter + for-in loops (with hasOwn) + forEach, async/await + promises and an invisible but always-on event loop, object proxies, counter-intuitive array and mapping manipulations, lots of different ways to create said arrays and mappings, very rich destructuring, so many weirdnesses in parameter handling, multiple ways to do imports that don't work in all contexts, exports, string concatenation + string interpolation, no integer type (but NaN), a "strict mode", two versions of comparison operators, a dangerous "with" keyword, undefined vs null, generators, sparse arrays, sets...

It also has complex rules for:

- scoping (plus global variables by default and hoisting)

- "this" values (and manual binding)

- type coercion (destroying commutativity! see the snippet after this list)

- automatic semicolon insertion

- "typeof" resolution

On top of that, you execute it in various implementations and contexts: several browser engines and nodejs at least, with or without the DOM, in or out of web workers, and potentially with WASM.

There are various versions of the ECMA standard that change the features you have access to, unless you use a transpiler. And that's without even touching the ecosystem, since this is about the language. There would be too much to say anyway.

There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.

I feel like a large slice of JS’s complexity comes from footguns you aren’t really supposed to use anymore; whereas with C# the complexity feels quite layered, multiparadigmatic, something-for-everyone, syntactic-sugary. But I probably know too much about JS and not enough about C#.
At least in C# you can ignore most of it, and the complexity doesn't really come from numerous footguns. You can still write Java 1.6/.NET 2 style C# code just fine, it's all there. The rest of the features can be fully ignored and they won't hurt you.

But then again, the newer features do make writing code a lot nicer, giving more compile-time analysis, warnings, etc., hopefully resulting in slightly better code. And the new features also enabled a lot of performance improvements in the runtime, which is nice. .NET 2/4 wasn't all that fast; .NET 8 can be a lot faster.

You can still write ES5 and it will work in the latest JS runtimes, so I'm not sure how this is different.

Further, the newer JS features do in fact give you better compile-time analysis, warnings, etc., and result in slightly better code.

You can even write ES3 and it should work in all web browsers.
Well, all the people that used JS 15 years ago took Douglas Crockford's advice very much to heart.
Crockford hates TypeScript and loves OG JS. He thinks the push to turn JS into C# is misguided and a waste of the original Smalltalk-y beauty of The Good Parts (source: he said as much to me at a lunch we both attended).
Does Crockford "like" OG JavaScript? He's most famous for writing a book that shits on how much of the language should be ignored and avoided. To be fair, a lot of that was right, but it's far from comprehensive for the language we have today. It seems like, despite another decade of improvements to the language, The Good Parts lives on in the minds of readers as something relevant today, while a new treatise on The Good Parts of modern JS is not present. There are definitely parts of JS today that should be discarded, like "var", but The Good Parts cannot help you with that, because when it was written you could not discard it, as there was no other option.

I've seen developers make complete messes of codebases that would be mostly trivial using modern JS features, and they hide behind The Good Parts to justify it. And this includes suggesting that classes are somehow bad, and that avoiding them in favor of POJOs and messily bound functions is preferable, despite JS not receiving a dedicated class concept until years after The Good Parts was published...

Does TS make JS non-Smalltalky? Static typing that is optional... and you still get a REPL, an online compiler, and the ability to dynamically inspect objects in your global object...
v1 of TS had a heavy "OO"/class-based bias to it; v3 and v4 made it viable for real-world JS code, but people's perception stuck, both because those who looked early on saw something they didn't like and because of the code produced by many who loved it early on.
> the original small talk-y beauty of The Good Parts

Javascript seems much, much, much closer to Lisp than to Smalltalk. Granted, all three are very dynamic, but message passing needs to be bolted onto javascript. Meanwhile pretty much all of lisp is included "for free" (...via some of the ugliest syntax you've ever used).

Totally agree. If JS had leaned further into its Smalltalk-y-ness and ended up with dynamism similar to Ruby, for example, I'd actually be really happy with it personally. True message passing and more metaprogramming features allowing you to change execution context would be fun to play around with in a forked version of JS somehow.
How long ago was this? TypeScript v1 and v2 definitely had a class-based stink to it since the typing system only handled those scenarios somewhat well.

Right around when I started using it (mid 2019) there were a bunch of v3 releases that each on its own might not have seemed like much, but they all improved small parts of the engine, making it easy to get typing on most of your code in a functional style without adding more than a few type declarations and some function typings.

I'm all for people writing functional code with Javascript -- but when people eschew classes because of their "stink" and proceed to use all of the stateful prototypal archaic features of JS instead of classes, I have to protest. If you are using `this` and function binding and state extensively in your "functional" JavaScript, you are reinventing classes poorly. And classes are a part of JS itself, not something added on to JS by Typescript (in the current day).

The Crockford crowd would like us to live in a world of ES5 as if that's some kind of badge of pride, while justifying it with a warcry of "functional", while breaking the preconceptions of functional programming all throughout.

My point was rather that a functional style with plain old data was always preferable, the "stink" was that early TS versions _favored_ non-functional OO style that suited the TS type system at a time when most modern JS code already had a functional approach, so you ended up writing non-idiomatic JS because the TS type system wouldn't handle idiomatic code (this also contributed to a lingering distrust in TypeScript that persists to this day despite the upgrades to the typing system).

Personally I prefer neither prototypes nor classes. 90% of the time you just want interfaces, unions, or inferred types, and in the few places where you actually want inheritance and/or object methods, you're really better off with a factory function that creates a literal which is used directly or satisfies an interface.

Using JS/TS in a functional style doesn’t mean using prototypes or function binding or anything. I’m not sure how you inferred that from the comment you’re replying to…it just means using plain objects (TypeScript even, a little awkwardly, lets you express ADTs) and plain functions, perhaps with the module system to organize them.
Classes are still built on prototypal inheritance; there are some differences, but they are essentially an easier-to-use API on top.
Yeah, iirc private properties (added in ES2022) are currently the only part of ES6 classes that can neither be created nor accessed using prototype-based code, to the consternation of some people when they were added. Of course, private properties can still be readily emulated with a WeakMap in a function scope.
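For illustration, a minimal sketch of that WeakMap emulation (the `counts` and `Counter` names are made up):

    // Private state lives in a WeakMap held in module/function scope;
    // entries are keyed by instance and garbage-collected along with it.
    const counts = new WeakMap();

    class Counter {
      constructor() {
        counts.set(this, 0);
      }
      increment() {
        const next = counts.get(this) + 1;
        counts.set(this, next);
        return next;
      }
    }

    const c = new Counter();
    c.increment(); // 1
    // The count is unreachable from outside without access to `counts`.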
> I feel like a large slice of JS’s complexity comes from footguns you aren’t really supposed to use anymore

I'm not inclined to use a language that can't be fixed.

Hmm, I understand what you mean. But I think there's a difference between complexity and optionality / versatility.

For example, it has different ways to declare functions because assignment is generally consistent (and IMO easy to understand) and the "simplicity" of Javascript allows you to assign an anonymous function to a variable. However, you can also use a standard function declaration that is more classic.

But I do understand what you're saying. If anything, I think it's generally in agreement with my feelings of "Javascript doesn't need to be more complex."

> There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.

This is hilarious and probably true. I think I am the former since I've been working with it for 20+ years, but I also think there's a reason it's the go-to bootcamp language alongside Python.

I do appreciate the modern features a lot personally, and the fact they have been added without breaking the world. Interpolation, spreading, map and arrow functions are a huge usability boon. Plus I'm glad I can use classes and forget that prototypes ever existed.

But I train beginners in JS and boy do I have to keep them in check. You blink and they shoot themselves in the foot, give the pieces to a dog that bites them, and then the dog gives them rabies.

In fact, Javascript is so complex that one of the seminal books on it was specifically "The Good Parts", cutting down the scope of it to just the parts of the language that were considered decent and useful.
I think the distinction with JavaScript compared to other 'complex' languages is that you don't have to go beyond "The Good Parts" to achieve significant functionality, and it has become idiomatic to use the good subset.

In some respects I think if there were a well defined "Typescript, The Good Parts" I would happily migrate to that.

I do wonder if there will, one day, be a breaking fork of JavaScript that only removes things. Maybe a hypothetical "super strict" might do the job, but I suspect the degree of change might not allow "super strict" code interacting with non "super strict" easily.

BiteCode_dev has provided a pretty good summary of a lot of the issues. A lot of them have easy fixes if you are prepared to make it a breaking change.

ESM is the closest to a major version for JS. It forces strict mode, which includes many non-backwards compatible changes [0]. Most notably, it removes the `with` statement. Other examples of removals are octal literals or assignments to undeclared variables.

[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Yep, it's going to take time but eventually one day the vast majority of JS code in the wild will be strict mode and the non-strict stuff will be de facto deprecated and not expected to work everywhere
That's certainly true when writing code, but when you are reading code someone else wrote that flies right out the window. This is hardly a unique problem with Javascript, many languages get tagged as "write only" because there is such a large array of available features that you never know which subset someone is going to use. C++ is notorious for this for example.
> BiteCode_dev has provided a pretty good summary of a lot of the issues.

Around half of that list is things added later that were supposed to make the language easier to use.

After the success of the Good Parts book there were a few other books with "Good Parts" in the title, like HTML & CSS: The Good Parts and Java: The Good Parts. I seem to remember looking through PHP: The Good Parts and feeling that it was just a learn-PHP book, that it did not really differentiate between any core good parts of the language, the parts you should really learn and use.

I sure would like a real "Good Parts" series of books.

> reasons to believe JS is simple

It's because people are using language wrong and merely talking past each other. The word "simple" is often used to mean "easy" or "familiar".

Simple is very different from easy, and familiar things are easy but don't have to be simple at all.

javascript is not simple, but it is easy.

> using language wrong

There is no wrong use of language; there are just people who don't bother to communicate well in the most effective language available. In this case you could simply reconcile the two viewpoints, since you have the insight to do so, rather than blaming one party and calling them wrong (...which is wrong).

> There is no wrong use of language

Thy can'n't sirus be. language works only farso as withbreathings we follow, Leading paths through gardens means fail, and bloom'st chaos wear not!

I don't think that's a serious barrier in this case. The above poster would rather dismiss a statement than give it serious thought. It's easier to simply claim to know the one true meaning of "simple" than to actually communicate effectively.
> rather than actually communicate effectively.

what's more effective than having a pre-defined term be what it means, rather than what the speaker intends internally?

> what's more effective than having a pre-defined term be what it means, rather than what the speaker intends internally?

Trying to understand the speaker, presumably, and not wielding your pet definition like a semantic argument. It's fundamentally boring conversation with no benefit to either party and it makes you look like an illiterate ass.

It's on the list of languages that used to be simple, I think.
Yes, but when "the good parts" came out, half of this list was already true.

There is a reason we ignore a good chunk of the language to be productive with it.

Not just half of it, the central part of it. Javascript did not grow into something huge, it started that way: a prototype-based wannabe Java that accidentally (?) shipped with a full Scheme included alongside. The latter remained mostly dormant until "the good parts" came along and put it into the (deserved) spotlight, relegating the prototype stuff from idiomatic to niche, for when you are doing something particularly clever. It's a unique mess that has led to something no purer language could dream of.
It would have worked out fine if we managed to lose the prototypes without ramming classes into the language and baiting all the Java dickheads over to the web ecosystem.

Javascript breathed its last breath the moment someone saw NestJS and said "wow, that's a good idea".

> Javascript breathed it's last breath the moment someone saw NestJS and said "wow that's a good idea".

I still don’t understand how someone looked at Spring and thought “Wow, that’s pretty good! I’ll bring it to a platform that has worse performance than Java, in a language that was designed with dynamism in mind and has no native static typing”.

Asking out of curiosity: what's your rationale for saying Spring is slower? I've worked on a couple of greenfield and existing Spring Boot applications and we never had any performance issues caused by Spring. Spring has its own bad parts, but worse performance than Java is not one of them; it's not even a valid comparison. Java is crazy fast for a VM-based runtime AFAIK.
Java is faster than Node.

You take a slow framework design, like Spring's, and put it on a slower runtime (Node), so you get doubly slow with fewer benefits.

> shipped with a full scheme included alongside

Sorry, what?

https://journal.stuffwithstuff.com/2013/07/18/javascript-isn...

The "scheme inside" might be missing the mark in any number of features, but in the end the resulting effect it had have been profoundly, well, effective. It's there.

People have been going all SICP (Abelson/Sussman) on JS ever since Crockford exposed the hidden Scheme (or hidden not-Scheme-at-all, if you insist) and moved JS far, far away from the humble prototype OOP it started as. And that had little to do with any language extensions that had been creeping in very slowly, given the lowest-common-denominator nature of web development, and everything to do with the funky scope binding tricks that generations of programmers had been taught in the memorable "let's make Scheme OOP" drills of SICP and the MIT course (that so many other universities based their teaching on).

>There is a reason we ignore a good chunk of the language to be productive with it.

The same can be said for most languages, even assembly language, and especially so for C++.

This is true, but what's also true is that with Biome or ESLint more than half of your complaints are gone. JS has always had bad parts, but today it's a lot easier to avoid them thanks to linters. And if you do stay in the good parts, it's my favorite language, for many reasons.

That said, I hate the constant stuffing of features (though not this one, which is much needed), more stuff around JS like Web Components, or CSS adding a ton of sugar.

I think what saves JS, like Python, is that despite the modern complexity, you can start to be productive in a few days with just the essentials and learn the rest with a smooth curve, slowly as you progress.
>Javascript is not simple AT ALL.

I prefer languages with a small instruction set, as then you can learn all you can do in the language and hold it in your head. JavaScript used to have a small instruction set, I don't feel it does any longer.

Aside from this I don't know that I see any benefit to these structs, although perhaps that is just the article doing that whole trying to write JavaScript like Java thing that making classes and constructors enabled.

Even if you think Javascript is already complicated, that isn't a reason to make it more complex.
Complexity isn't bad. If you have a simple language and you have to do something complicated, then you still have complexity; it's just expressed differently and uniquely in every code base.

Java doesn't have unsigned integer types because that is "simpler", but that doesn't remove the need to deal with unsigned integers in file formats and network protocols. Now you have to write a convoluted mess of code to deal with that. I'll take a complex language that solves real problems over a "simple" language any day.

That is still pretty simple as far as mainstream languages go.
You are not wrong, but a lot of the stuff you mentioned is literally a non-issue with any modern JS environment.

It feels like any old language gets this way...

And ESM vs CJS. What should be an inconsequential transition has turned into a minefield where adding a dependency to your package.json might blow up your build system, testing library, or application without warning. I've literally wasted weeks of my life debugging this pile of poop that is the JS ecosystem.
Is this really a JS or a Node issue though? We have no such issues with Bun.
I guess technically Node, but in practice JS, since Node is still the de facto standard non-browser JS runtime. "Just use X" where X is some other build tool/runtime/testing ecosystem is another weird trope that's somehow considered acceptable advice in JS, but would be a massive undertaking for any non-trivial project.
Maybe if someone suggests Deno or similar, but Bun is a drop-in for Node and is as such completely interoperable with your Node code. It's similar to how you can adopt pnpm easily because it's a drop-in for npm, whereas yarn and others are their own things.
3 ways to declare functions? I am probably blanking but I can only think of:

    function foo() {}
    const foo = () => {}


    function x() {/* ... */}
    const x = function() {/* ... */}
    const x = function foo() {/* ... */}
    const x = (function() {/* ... */}).bind(this)
    const x = (function foo() {/* ... */}).bind(this)
    const x = () => {/* ... */}
    const x = () => /* ... */
Apart from hoisting (which has little to do with functions directly) and `this` these are all equivalent
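For anyone unsure what the hoisting caveat means, a minimal sketch (names made up):

    hoisted(); // works: function declarations are hoisted along with their body
    function hoisted() {}

    notYet(); // ReferenceError: the binding is still in the temporal dead zone
    const notYet = () => {};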
Not sure if it counts but there is `new Function("return x;")`
Doesn't `function* ()` count?

After all, you can add a `*` to any existing function without a change in the function or its callers.

4 ways
5 ways, the arrow functions have two different syntaxes:

  () => { return 1; }
  () => 1
That’s still one way - arrow function.
Well, these have the same result, so if the two types of arrow functions don't count as different then neither should these two assignment versions:

  const foo = (function() {}).bind(this);
  const foo = () => {};
Edit: And speaking of assignment versions, there's a new comment that adds a third of the same. I kinda get the feeling a lot of the "multiple ways to declare functions" is just people who don't understand how the pieces of javascript fit together and think these are all independent. They're not. Just declaring a function has only a few ways, but declaring a function and giving it a name multiplies that out to some extent.

In javascript, functions are first-class objects: they can be assigned to variables and passed around just like numbers or strings. That's what everything except "function foo() {}" is doing.

`const foo = function() {}`
Do function expressions count?
const x = { foo() {} }
Why did JS's with keyword not work out while similar constructs in Python and Ruby were fine?
Python's `with` and Javascript's `with` don't do the same thing. In Python it introduces a context, which is a scope within which certain cleanup tasks are guaranteed, which improves the ergonomics of things that require you to close() at the end, similar to `defer` in Go. In Javascript it allows you to access object properties without prefixing with a name, which leads to confusion about scope.
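A minimal sketch of that scope confusion (sloppy mode only, since strict mode forbids `with`; `obj` is a made-up name):

    const obj = { x: 1 };
    with (obj) {
      x = 2; // assigns obj.x
      y = 3; // obj has no `y`, so this silently creates a global `y`
    }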
JavaScript is simple in comparison to other languages. Not many people would disagree.
Yep and TS has all of that plus a gigantic layer of bullshit on top.

The web development community has created the perfect environment for nobody to ever get any work done while still feeling like they're being productive because they're constantly learning minutiae.

Typescript is the best thing to happen to JavaScript since ES6
Types are great, but I prefer not having to transpile everything. Stepping through the TS code with a debugger is non-trivial.
You can use 90% of typescript from pure JavaScript: https://www.typescriptlang.org/docs/handbook/jsdoc-supported...
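For instance, adding a `// @ts-check` comment (or enabling `checkJs`) makes the TypeScript compiler check plain `.js` files against JSDoc annotations; a minimal sketch:

    // @ts-check

    /**
     * @param {string} name
     * @returns {string}
     */
    function greet(name) {
      return `Hello, ${name}`;
    }

    greet(42); // checker error: number is not assignable to string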
I'm not sure what setup you have, but debugging TS code is pretty trivial. I use vite for all my projects at work, and when running the dev server there is seamless integration with the debugging dev panel in Chrome or Firefox.
Source maps solve this, it’s not a problem in practice
If only it had a sound type system.
It has a very good type system, imo better than most "natively" typed languages.
Hasn’t caused me any practical problems
Yeah I was almost gonna say it makes this proposal feel redundant. I can enforce object shape pretty well using TS, not sure why we need another OOP like thing that tries to do the same.
Typescript doesn’t tell the JS runtime anything about what it knows
Yeah, I recommend reading “JavaScript: The Good Parts” and not using anything far beyond those parts. Instead of “compile-time safety guarantees” from these vendor-lock-ins-masquerading-as-open-source, just use the language as it was designed: dynamically, and unit test—because you’re gonna be doing those anyway.
This, except for:

> There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.

There is only one reason to believe JS is simple: because you don't know enough.

I think in this specific case it's JavaScript's requirement for backwards compatibility that bloats it... but there's a lot you can ignore. Like, you can declare a variable with var, let or const but there's absolutely no reason to use var any more. I feel similarly about the proposals to introduce records and tuples: https://github.com/tc39/proposal-record-tuple... in most scenarios you'll probably be better off using records rather than objects, and maybe that's what folks will end up doing.
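Under that proposal (the syntax is not finalized, so treat this as a sketch of the proposed semantics), records and tuples would be deeply immutable and compared by value rather than by identity:

    const point = #{ x: 1, y: 2 }; // record: an immutable object-like value
    const pair = #[1, 2];          // tuple: an immutable array-like value

    point === #{ x: 1, y: 2 };     // true: equality by contents, not identity
    // point.x = 5;                // TypeError: records cannot be mutated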

But boy does it all get confusing.

> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!

I'm not so sure about that. I think we end up consuming a lot of these features in the TS types that get published alongside libraries. We just don't know it, we just get surprisingly intuitive type interfaces.

> there's absolutely no reason to use var any more.

So I also thought. And then I recently learned that typescript uses `var` internally for performance.

From src/compiler/checker.ts:

    // Why var? It avoids TDZ checks in the runtime which can be costly.
    // See: https://github.com/microsoft/TypeScript/issues/52924
    /* eslint-disable no-var */
    var deferredDiagnosticsCallbacks: (() => void)[] = [];
If performance is so important for your app that `var` is causing issues then JavaScript is likely the wrong language
I can think of a few cases where hoisting is nice, stylistically:

    if (...) var x = ...
    else x = ...

    try { var x = ... }
    catch (error) { x = ... }

    for (...) {
      var x: NodeJS.Dict<any> = {}
      x[key] = ...
    }
    return x

All of those feel like anti patterns to me. Much more difficult to read.
The worst anti-pattern here is the catch- and finally-blocks living in a different scope.

Really, who thought it was a good idea that finalization and error handling blocks must have no access to their subject scope? Every damn language copies that nonsense, except for js and its `var` hoisting.

That's subjective, Idk about "MUCH more difficult".

All it does is move the declaration to the correct visual scope, instead of a dangling up-front declaration.

Admittedly, I understand most coders are already trained to read the latter.

> I'm not so sure about that. I think we end up consuming a lot of these features in the TS types that get published alongside libraries. We just don't know it, we just get surprisingly intuitive type interfaces.

Very true. As a downstream consumer, I can do all business logic in ancient, simple languages. But I'm sure these things are extremely nice to have for the more complicated upstream dependencies I rely on.

When they made class declarations imply strict mode, I thought that was a pretty wise move. But it might have been good if they had applied more limitations than that, made them super-strict.

Such as for instance making 'var' not work in class declarations.

ES modules too would have been a great opportunity to cut out some of the outdated cruft.
> Like, you can declare a variable with var, let or const but there's absolutely no reason to use var any more.

I am going to continue to use var for everything, because I think let and const are stupid.

It is not cool or interesting to learn about new scoping rules introduced by let, and it is not cool or interesting that so many people — especially juniors, but not exclusively — are lulled into a false sense of security by believing const means the referenced value is immutable, which it isn't.

I am going to continue to write small JavaScript, like from The Good Parts.
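The `const` confusion in one snippet: the binding is constant, not the value it references.

    const items = [1, 2];
    items.push(3); // fine: the array itself is still mutable
    items = [];    // TypeError: assignment to constant variable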

"let" is one of the good parts. Just don't use const or var.
Why is it? Why is block scoping useful?
Because it's how almost every C-syntax-like language works. JavaScript is the odd one out.
I’d prefer to just embrace it. Keep the language small. Prototypical inheritance works just fine. There’s enough in The Good Parts to get all the work done.
> but there's absolutely no reason to use var any more

Naw, var has function scope and hoisting, both of which are useful.

My job isn't to be infatuated with your code. It's to get through your code and get stories done.

People don't really get better at handling the complexity of large code bases. We are fundamentally the same organic matter that existed prior to the first computer coming into existence. So as code bases and library bases grow larger and larger, they need to be proportionately easier to read or even ignore.

Your code needs to be dead boring 90% of the time, otherwise you're imposing on your coworkers. And using variables before they're declared is just shitty behavior.

Sure and I'd accept the weaker claim that "there's no reason to use var in a production codebase touched by many people"
> Your code needs to be dead boring 90% of the time

I also prefer boring code, but I think having the choice between var, let, and const is less boring than only having var.

You should probably accompany this kind of claim with a code snippet to show what we're missing out on.
Not in a sensible codebase
Useful for what?
I've been meaning to write a longer essay on this for years, but I believe the reason for this observation is different cohorts.

Imagine you are a C# programmer just as C# 1.0 is released. C# is a fairly simple language at that time (and similar to other languages you already know), so you can get caught up on it fairly easily and quickly. A few years later, C# 2.0 comes out. It's got a handful of features, but not too much for you to absorb. Likewise C# 3.0, 4.0, etc. As long as you stay on the C# train, the rate of new features does not exceed the rate that you can learn them.

Years later, another person comes along and is new to C#, which is now at version 5.0. They are presented with a huge sprawling language and they have to learn nearly all of it at once to deal with codebases they are contributing to. It's a nightmare. They long for a language that's actually, you know, simple.

So maybe they find some other newer language, Foo, which is at 1.0. It's small and they learn the whole thing. After a couple of years of happy productive use, they realize they would be a little more happy and productive if Foo had just one or two extra little features. They put in a request. The language team wants happy users so they are happy to oblige. The user is easily able to learn those new features. And maybe some other Foo users want other new things. 2.0 comes out, and they can keep up. They can stay on the train with 3.0, 4.0, etc.

They never explicitly asked for a complex language, but they have one and they're happy, because they've mastered the whole thing over a period of years. They've become part of the problem that bothered them so much years ago.

Fundamentally, the problem is that existing users experience a programming language as the delta between the latest version and the previous one. New users experience a programming language as the total sum of all of its features (perhaps minus features it has in common with other languages you already know). If you assume users can absorb information at a certain fixed rate, it means those two cohorts have very different needs and different experiences.

I don't think there's a silver bullet. The best you can hope for is that a language at 1.0 has as few bad ideas as possible. But no one seems to have perfect skill at that.

Speaking as a long time C# developer (and before that, C and C++), every time I try to touch javascript I get that kind of allergic reaction - not because of the language features itself, but the ecosystem. In theory npm and nuget are the same kind of complexity; in practice, all the complexity of C# building disappears into Visual Studio.

A lot of people seem to think that the overall size and "complexity" of the language (and only the language) matters? Personally I don't think it matters how long the spec is if you and your team aren't using those features. The ecosystem matters more. "What should I use to write a GUI in C#?" is a complicated question with tradeoffs, but none of them have anything to do with the language per se.

Nothing is going to compete with C++'s template system for complexity, though.

> in practice, all the complexity of C# building disappears into Visual Studio

IMO, that's even worse.

It means that when you want to learn C#, you're also forced into learning a complicated tool that isn't really useful for much else.

At least when I'm learning Rust or Typescript, I can keep using my existing editor.

> A lot of people seem to think that the overall size and "complexity" of the language (and only the language) matters? Personally I don't think it matters how long the spec is if you and your team aren't using those features.

That works until you have to use code that does use those features.

> The ecosystem matters more. "What should I use to write a GUI in C#?" is a complicated question with tradeoffs, but none of them have anything to do with the language per se.

That's fair. At least to an extent.

The further you stray from the ecosystem's intended use cases, the more you have to depend on the quality of the language itself. Thankfully, for mature, mainstream languages like C#, there are a lot of things you can do before that point.

> in practice, all the complexity of C# building disappears into Visual Studio

> IMO, that's even worse.

To be fair, the more accurate way to phrase it is "disappears into .NET tooling", because this part is also exposed through the standard .NET CLI and isn't Visual Studio specific. Managing packages through npm and dotnet is quite similar, with the significant difference that the average dependency graph of a .NET application or package is 10 to 100 times smaller than that of a Node.js one, and compatibility breaks happen much, much more rarely.

> It means that when you want to learn C#, you're also forced into learning a complicated tool that isn't really useful for much else.

This is untrue. On top of Visual Studio, your choices are Rider, VS Code and VSCodium, Neovim and Emacs, and anything else that integrates through VSC extension bridges, LSP and debugger adapter protocols.

I also usually recommend that all newcomers start with the CLI to manage projects and packages, because it's more straightforward than navigating through all sorts of windows in an IDE, and because they also get to know the basics .NET builds on top of. It's an experience that is very similar to using Cargo.

This is spot on, well written, and perfectly reflective of my recent experience doing some recreational language hopping.

Incoherence through gradual accretion of complexity is probably the fate of most non-trivial systems, beyond just programming languages. Individual programs, certainly. Buildings too. (People?)

Also, I am a big fan of your books, Bob! Thank you! :)

Something about OP didn't strike me quite right, but your explanation here really nails it, I think. Especially because I can see that I'm in quite an old JS cohort - and quite happy with the language as a result - but if I were to start coding in JS yesterday I think I would gnash my teeth and tear out my hair.
This is quite a compelling story, but thinking about it, I don’t fully agree.

There’s more than one language that I initially disliked, and only learned to like after some of (what I saw as) the glaring flaws were fixed. After they added more features!

For one, Objective-C. I didn’t like it at all until they added ARC, removing the need for manual retain / release everywhere. After that, coming to the language pretty late, I came to like Obj-C quite a lot.

JavaScript is another one. For a long time I thought it was an awful language and avoided using it, but a few things have brought me round and now I really like it:

- modules

- async/await

- TypeScript, if you’ll allow that as a “JavaScript feature”

I even quite like JS classes, although I could live without them.

Simplicity is good, but power and expressiveness are also good.

Been using .net since 2.0 and nah C# has jumped the shark. Primary constructors are a very poorly designed feature that for some reason was added in the last version.

The new-ish yearly release cycle is mostly to blame, I think: they feel they need to add some headline features every year, but the team also, maybe due to org-chart politics, seems not really able to make the deep runtime-level changes that are needed to actually add anything useful, so they just add syntax sugar every year, bloating the language.

The emphasis on syntax sugar has a very useful side effect, which is that new language features can be used on old runtimes. To this day some of the newer C# features are usable when targeting the ancient .NET Framework 4.x, like 'ref returns'. This would not be possible if every new language feature was paired with runtime-level changes. (Many new language features do come with changes to the runtime and BCL). I support a bunch of people who use NET4x to this day and I'm able to write modern C# for that target thanks to the language and compiler being designed this way.

A lot of stuff is also designed to be independent of library changes - IIRC for example if you use nullability, the compiler will emit the Nullable attribute's definition into your .dll as a hidden class, so that your library will work even on older versions of the runtime with older base class libraries. Doing this complicates the compiler (and adds a tiny, tiny amount of bloat to your dll) but means that more people can adopt a new feature without having to think about upgrading their SDK or runtime.

My personal opinion is that if a change can be done adequately entirely at the compiler level without runtime/library changes, it should be done there. It allows the people working on the language, libraries and runtime to iterate independently and fix problems without having to coordinate across 3 teams.

> Primary constructors are a very poorly designed feature that for some reason was added in the last version.

I upgraded to .NET 8 recently and I love primary constructors. I don't use them everywhere but they are great for dependency injection or for small classes.

Beware the footguns!

https://mareks-082.medium.com/dark-side-of-the-primary-const...

Let's just say they could have done a much better job on it. It feels rushed and haphazard for such a mature language.

It doesn't seem that bad -- the lack of readonly would be my only concern out of that article and one I didn't actually consider.

I think, as a feature, this is sort of the MVP. They could have done a better job of it by adding more to it (e.g. maybe allow the readonly modifier on the constructor properties). It's hard to imagine them being able to take anything away from primary constructors that would make it better.

Just as a quick side note: this is actually one of the things I’ve come to appreciate most about the work you and the others have done with Dart. It has very clearly gotten much more powerful and has had to deal with some major changes in requirements over the years, but on the whole, with only a few exceptions, the complexity doesn’t feel like it’s gotten away from me or the community at large. It’s very obvious just looking at it that a tremendous amount of work has gone into the language design itself, and I figured now would be a good time to offer my appreciation for that.
Thank you!

We try really hard. I'm always worried about pouring too much complexity in and alienating new users. At the same time, we also want to serve existing users who really do benefit from new features. It's a tricky balancing act.

I can't speak for C#, but in C++'s case, the issue is that there are a lot of programmers who don't keep up with the language they're using. As a result, you get a few people pushing the language ahead, who are deeply involved in its future. And then the vast majority of people are still using C++03, and it's still taught the same way as it was ~20 years ago.

I think the only way to address what you're alluding to is to continually deprecate small parts of the language, so that upgrading is manageable for active codebases. And you probably have to be really aggressive about pushing this forward, because there will always be excuses about why you should hold back just this one time and this one feature is an exception that needs to be held back just a little bit longer.

But in the long run, if you don't force people to change a little bit continuously, it will become a big enough issue to split the ecosystem. See python2 to python3. Or you end up forced into supporting bad practices for all eternity, like C++. And having to take them into account for every. Single. New. Feature.

That further raises the barrier to entry for participating in the language's development, leaving it to people who are completely focused on its development and have unusual mastery of it, and who can't identify with the people struggling with its complexity.

If not at the technical level, then at the business level, where people definitely don't have the time to understand why it'd be safer for the go-to heap allocation method to return a scoped pointer instead of a raw pointer.

Unfortunately, this probably is only viable for strongly-typed languages like C#; for loosely-typed languages like Python, the risk of iterative changes is that if someone moves a codebase several generations ahead at once, it'll introduce lots of subtle changes that'll only become obvious once certain codepaths are exercised...and given how bad testing coverage is for a lot of software, that probably risks breakages only occurring once it's deployed, that are nontrivial to discern via reviews or even static analysis.

> continually deprecate small parts of the language

Did you see https://news.ycombinator.com/item?id=41788026 ?

"My concern, and IMO what should be the overwhelming concern of the maintainers, is not the code that is being written, or the code that will be written, but all the code that has been written, and will never be touched again. A break like this will force lots of python users to avoid upgrading to 3.17, jettison packages they may want to keep using, or deal with the hassle of patching unmaintained dependencies on their own.

For those Python users for whom writing python is the core of their work that might be fine. For all the other users for whom python is a foreign, incidental, but indispensable part of their work (scientists, analysts, ...) the choice is untenable. While python can and should strive to be a more 'serious', 'professional' language, it _must_ have respect and empathy for the latter camp. Elevating something that should be a linter rule to a language change ain't that."

Strongly phrased, but planned obsolescence in a language is really expensive. You're basically quietly rotting the work of your users, and they will hate you for it.

I note that C# basically hasn't deprecated any of the language itself, the dotnet core transition was a big change to the runtime. And that was expensive enough, again due to dropping a lot of old libraries.

But they can simply continue using whatever version of Python works for them, right?

I never got this argument, personally. Sure, having to rip out a bunch of code because of lang-level changes can suck, but you also only really need to do that if you're starting a new project anyway or for whatever reason want to keep being pinned to @latest version of the lang.

If you're a researcher who uses Python 2 as a means to an end, then just stick to Py2. It's unreasonable to expect the entire python world to freeze in place so you don't have an annoying migration journey. If you need the latest Py3 features, them's just the breaks I'm afraid; eventually APIs need to change.

No, because features get added to new versions of Python, and libraries they need to use may depend on those features. “Just don’t upgrade Python” is advice that’s only going to last a finite amount of time before introducing pain points and/or risk.
Not keeping up with the language is quite common in traditional enterprises, that is why you get shops still doing Python 2, Java 8, .NET Framework (C# 7.x), C89, C++98,....

People get paid to keep running what already exists, not to write new stuff.

Usually new stuff only comes to be if there is a new product being added to the portfolio, and most of the time it comes via an acquisition or external contractors, not new development from scratch in a cooler version of the stuff they are using.

> When Typescript first came out, it was great. Types in Javascript are something we've always wanted. Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!

TypeScript today can be written the same way that TypeScript was when it first started to become popular. Yes there are additions all the time, but most of them are, as you observe, irrelevant to you. They're there to make it possible to type patterns that would otherwise be untypeable. That matters for library developers, not so much for application developers.
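As one illustration of the kind of type machinery that lives in library typings rather than application code, here is a simplified, hypothetical version of the pattern route-typing libraries use, extracting `:param` names from a route string with template-literal types:

    type PathParams<Path extends string> =
      Path extends `${infer Head}/${infer Rest}`
        ? PathParams<Head> | PathParams<Rest>
        : Path extends `:${infer Param}`
          ? Param
          : never;

    // Application code never writes this; it just benefits from it:
    type Params = PathParams<"/users/:userId/posts/:postId">;
    // => "userId" | "postId"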

To the extent there's a barrier to entry, it seems largely one that can be solved with decent tutorials pointing to the simple parts that you're expected to use in your applications (and a culture of not overcomplicating things in application code).

TS is pretty good. In the beginning, I had to resort to 'any' frequently, but nowadays it's often possible to avoid that by writing some auxiliary types. It's not easy, but you can get quite far. JS without TS' safety net is a nightmare...
As a very experienced TS developer myself - this is absolutely correct, the best reply in this thread, and I hope the OP reads it.
> The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.

That's funny given many of the changes were made to make C# look more like JavaScript!

C# 6 introduced expression-bodied members for simplified syntax (like JavaScript), null-conditional operators, and string interpolation. C# 7 brought pattern matching, tuples, deconstruction, and local functions. C# 8 introduced nullable reference types for better null safety, async streams, and a more concise switch expression syntax. C# 9 to C# 12 added records, init-only properties, with expressions, and raw string literals, global using directives, top-level statements, list patterns, and primary constructors.

In C#, if you need a string list you can do:

    List<string> items = [];  // Not as concise as JS but type safe.
As for TypeScript, nobody is supposed to use most of it -- unless you're authoring a library. You benefit from its features because somebody else is using them.

Languages draw inspiration from each other -- taking the good parts and incorporating them in. C# is a vastly better, easier, and safer language than it used to be and so is JavaScript.

This is why I always say the true beginner programming language is C.

Stupid easy to learn, have some loops, have some conditions, make some memory allocations. You will learn about the fundamentals of computing as well, which you might as well ignore (unknowingly) if you start with something like JavaScript (where is this data living in my computer?).

And this is why I always say, we have a world full of computer consumers, not programmers.

C as a first language is only easy if you happen to bring along a deep technical interest in (and prior knowledge of) the "technical fundamentals of computing".

Most people do not have that.

Tell them about heap and memory allocations and you will get a blank stare.

But show them some simple functions to make some flashing graphics on the screen, and they will have fun. And they can learn the basics of programming at the same time.

And then you can advance more low level, for those who feel the call. But please don't start with it, unless you have some geeky hacker kids in front of you who really want to learn computers. Then C makes sense. For "normal" people not so much.

Excel is the best first language if you just want a world full of computer programmers.

But there is an implicit context here around those who want to program alongside the professionals. That comes with wanting some deeper understanding of the machine.

I'm not saying this because I like Go per se, though one might argue I like Go because I can say this, but "Go, but stay away from all concurrency for a while" is becoming my go-to recommendation for new programmers. Fewer foot guns, it's close enough to the metal you can learn about it (there's even an assembler you can play with if you want to), and despite being 15 years old is essentially not the result of 15 years of language-construct-astronautics. It also lacks a lot of the accidental complexity of C in the compilers, using new modules is "go get" rather than essentially learning a new sublanguage, a lot of advantages.

But do stay away from the concurrency. I occasionally get flak on that point, but try to remember back to your early programming days, when you were having enough trouble keeping track of how one instruction pointer was flowing; it doesn't help to immediately try to keep track of multiple. Gotta recover the novice mindset for a moment when recommending programming languages.

I used to recommend Python, as many others did. Your cited disadvantages of such languages are certainly true, but Python used to make up for it with the ability to do real work relatively quickly, and while it may not have taught you how the machine worked, it did a good job of teaching programming. But now... well... Python was my primary hobby language for about 8 years around 2000-2008. I'm fluent in Python. I wrote metaclasses. I wrote a bit of a C module. And I can still read it, because I do check in from time to time. But it's not the same language anymore, and almost every change it has made has made it harder to recommend as a new language. It used to be the simple alternative to Perl that still had most of the power... now I think it's harder to read than a lot of Perl 5, what with all the constructs and the rules about what happens and the difficulty of resolving what a given line is going to do with all the ways of overloading and decorating and overriding everything. And the culture of having all this power, but using it selectively, is gone from what I can see; now it's "we have all this power and what a shame it would be not to use it".

The problem with C is that beginners generally want to build something.

"Oh, you want to build an app that does X? Well, first learn C for three months and then switch to Python/Javascript/etc. to build the thing that motivated you in the first place" doesn't fly.

Aye, especially nowadays with the ubiquity of computing - "hello world", or console apps in general, aren't the most enticing of projects anymore.
How can you teach C when there's no list of UB, there's sometimes no agreement on how to read the standard, and loads of non-standard-compliant compilers.
Plenty of universities teach C every day even if that means specifying a compiler, and usually it’s a very boring compiler that gets chosen
Right, but all C courses I've done had UB in examples without mentioning it. The teachers just didn't know what they were teaching.
Unless you are writing a compiler and code specifically tailored to take advantage of UB, there’s not much to say about it other than “be careful about UB cause it can make your program buggy from your POV.”
> How can you teach C when there's no list of UB, there's sometimes no agreement on how to read the standard, and loads of non-standard-compliant compilers.

Right. Because no one ever learned C as a first language ever, and those that paradoxically did were worse programmers for it!

Are you saying people who learned C as their first programming language are better programmers or worse?
> Are you saying people who learned C as their first programming language are better programmers or worse?

That's both a false dichotomy and irrelevant as well.

My message is "There are multiple excellent (even legendary) developers in the short history of our field that learned programming in C. There are many more who primarily used C".

This refutes your point completely.

From that perspective, something like Pascal or Modula-2 is much better - you get all the same stuff but no footguns.
Actually yeah, I'd say even something like zig would work but that's pushing it a little bit in terms of feature complexity.
> And nobody uses most of it!

Everybody who does Express, React, or any other popular advanced libraries with TypeScript is using these features. Some things are simply more useful to libraries than line of business code - that's fine. The line of business code is much better thanks to it.

> Everybody who does Express, React, or any other popular advanced libraries with TypeScript is using these features.

This is very true and my original post was short sighted. You could, of course, make most upstream dependencies without modern language features. However, their complex jobs get much easier with these features.

Downstream, business logic is much easier to implement without these features compared to complex, low level functionality.

For sure! In a basic API endpoint, I don’t need advanced typescript features.

But if I’m writing a module that a lot of other consumers in the codebase will use, and I want to make their lives easy, I might use a lot of advanced TS features to make sure than type safety & inference works perfectly within the module. Whoever consumes it can then rely on that safety, but also the convenience. The module could have some convoluted types just to provide really clean and correct auto-complete in a certain method. But most people don’t need to worry about how that works

Yeah I was confused by this point as well. Especially because many of the recent Typescript releases are just improving performance or handling more cases (without needing to learn new syntax).
React and Express.js predate TypeScript, Express.js considerably so.
Doesn't matter, I'm talking about the type definitions - @types/react, @types/react-dom and @types/express.
No, those are optional for the end user to ever encounter.
I never said it's required. The typings are really useful if you want to use these libraries "with TypeScript", as I said in my first comment... The typings are the whole point - that's where the advanced type features are used, and every user benefits: their own code can be much simpler and safer thanks to it.
"Everybody who does Express, React, or any other popular advanced libraries with TypeScript is using these features. Some things are simply more useful to libraries than line of business code - that's fine. The line of business code is much better thanks to it."

I think your model of how people use modules is flawed.

I doubt most people using those modules interact with them through TypeScript because of the perceived (and subjective) benefit you see in typing everything.

For example, I use many TypeScript-written modules without using TypeScript in the code that uses them, and am better off for it, because my R&D work does not want the advanced features of TypeScript. We can switch to it, or an OOP server language, if that becomes useful later.

For library types to be usefully exposed to me "with TypeScript", as you claim, my own code has to be TypeScript. In that case, to avoid compile errors and a wall of "any" types, I reasonably have to switch my own code to TypeScript classes and the like, even where that is just bloat. Another reason I use libraries is to do things without ever interacting with them beyond input props (e.g. a drag'n'drop library with JSX components). In that case, the type (JSX component) is irrelevant for me to include, and approximately 0% of experienced developers are going to pass something other than a JSX component to a drag'n'drop library.

In other words - I derive benefit from them using Typescript without having to use it myself. Pushing Typescript as "necessary" because popular libraries have interfaces is exactly the kind of thing that slows down R&D and fast processes.

I have used many languages with types for many years. I understand their value. However, much of the value is code coherence, working with other people, and domain models being embedded in the code. These benefits are not always useful in small web applications.

Typing is one of those things... you love it when learning a codebase, for big projects, and for certainty when you are coding boring things. For other things, there's more to life than writing type definitions and overloading methods. You can be much more productive just using primitives in some scenarios and make research discoveries faster and with more flexibility.

What I have seen is that with every generation of coders, a new type-heavy language/framework becomes popular (.NET, Java, TypeScript); then it becomes "uncool" because people realize how bulky and unnecessary most of it is, especially for anything small or research-y; then it loses adoption and is replaced by another.

What did Bjarne Stroustrup supposedly say? There are two kinds of programming languages: the ones everybody complains about, and the ones nobody uses.

I'll put on my Scheme hat and say "with hygienic macros, people can add whichever language features they want." Maybe Rust is a good experiment along those lines: C++ with hygienic macros.

Everything that people keep using grows into a monster of complexity: programming languages, software, operating systems, law. You must maintain backward compatibility, and the urge to add a new feature is too great. There's a cost with moving to the new thing -- let's just put the new thing in the old thing.

I'm an absolute beginner when it comes to programming, and I chose C# as my first language to learn.

I've been learning steadily for 8 or so months now and at no point have I felt the language was unapproachable due to excessive features.

Looking back on what each new version added, I don't think any of the additions were damaging to the simplicity of C#.

I do likely have a biased perspective though, as I use newer C# features every day.

> I do likely have a biased perspective though, as I use newer C# features every day

I think that is kind of the point, though. Many of those newer features help with simplifying code and making it less boilerplate-y. To old programmers it is a simple code fix in the IDE to move from 30 lines of variable assignments in a switch to a 5-line switch expression, and they can learn that way. People new to the language typically won't even consider going the complicated route, because they learned an easier way first.

I do concede that having people with less C# experience on a team where modern C# is used, there will be constructs that are not immediately obvious. SharpLab has an “Explain” mode which would be helpful in such cases, but I haven't seen anything like that in IDEs: https://sharplab.io/#v2:C4LgpgHgDgNghgSwHYBoAmIDUAfAAgBgAJcB...

However, as a personal anecdote, we've had a number of developers who have written mostly Java 1.4 (technical reasons) before switching to C# about a year ago. They took up the newer features and syntax almost without problems. Most questions I got from them were along the lines of “Can we also use this feature?” and not “What does this do?”.

It doesn't help how arcane the TS documentation is. Important docs live on as frozen-in-amber changelog entries; huge tracts of pages are "deprecated" yet still #1 on Google.

Google "typescript interfaces." #1 is a page that has been deprecated for years. How did this happen?

AWS has that issue too - v1 documentation takes precedence over v2; the same with Bootstrap. I suspect Google's algorithms don’t quite understand deprecation / latest-version prioritisation.
I can tell you right now that a lot of important information about mapped types lives in GitHub issues on the TS repo.
Documentation where, somehow, every single thing you can find for some particular need is "deprecated", and where it's weirdly difficult to find a complete set of docs without deprecated landmines mixed in with the current stuff, is kind of a Microsoftism.
Try Go. Go is really stable as a language and has a very small core feature set.
This is easily the most appealing thing to me about Go. I learned Go through the "Learn Go with Tests" way and I had a ton of fun.

It is hard for me to recommend using Go internally since .NET/Java are just as performant and have such a mature ecosystem, but I crave simplicity in the core libraries.

Here's the link for anyone considering learning Go: https://quii.gitbook.io/learn-go-with-tests

.NET/Java are only as performant as Go if you completely ignore memory usage and focus only on time.
OpenJDK and .NET compilers run circles around Go's. It's not even close. The second you go beyond "straight-line" code, where a function body has a limited number of locals and doesn't make many calls, the difference becomes absolutely massive. Go also does not do any sort of "advanced" devirtualization, which is the bread and butter of both of the others for coping with codebase complexity and the inevitable introduction of abstractions. Hell, .NET has surpassed Go in compilation to native binaries too. Here's a recent example: https://news.ycombinator.com/item?id=41234851

In terms of GC, Go has a specialized design that makes tradeoffs to allow consistent latency and low memory usage. However, this comes with very low sustained allocation and garbage-collection throughput, and Go the language does not necessarily make it obvious where allocations happen. So, as sibling discussions here and under the Go iterators submission indicate, this results in an amount of effort spent getting rid of allocations in a hot path that would be unthinkable in C#, which is much more straightforward about this and, much like Java, copes with high allocation throughput with ease.

It is indeed true that Java makes different design choices when tuning its GC implementations, but you might see much closer to Go-like memory usage from .NET's back-end services now that DATAS is enabled by default, without the tradeoffs Go comes with.

Yep, for expert driven projects, such as Go and C#, it is nearly always a case of "everything is a tradeoff".

Another good article for comparing GC between Go and C# https://medium.com/servicetitan-engineering/go-vs-c-part-2-g...

Noting that the article's findings from 2018 need to be re-evaluated on up-to-date versions before deriving conclusions, because in the last 6 years (and especially in the last 3 or so for .NET) the garbage collector implementations of both Go and .NET have evolved quite significantly. The sustained multi-core allocation throughput graph more or less holds, but other numbers will differ significantly.

One of the major factors that plays in Go's favour is the right attitude to architecting the libraries - zero-copy slicing is much more at the forefront in Go than in .NET (technically incorrect, but not in terms of how the average implementation looks). Meanwhile, the flexible nature of C#, combined with .NET being seen as "be glad we even support this Microsoft's Java" by many vendors, leads to poor-quality vendor libraries. The result is that developers see Go applications being more efficient, not realizing that it's the massively worse implementation of a dependency their .NET solution has to deal with. (There was a recent comparison video where .NET was estimated to be slower, but the culprit wasn't .NET, it was the AWS SDK dependency, plus the benchmark author being most familiar with Go and making high-impact optimal choices there, like using DB connection pooling.)

I'm often impressed by how much punishment the GC and compiler can take, continuing to provide competitive performance despite massive amounts of data reallocation and abstraction bloat thrown at them by developers who won't even consider approaching C# in an idiomatic way (at the very least by listening to IDE suggestions and warnings). In some areas, I even recommend looking at community libraries first; they are likely to provide a far superior experience if the documentation and a brief code audit indicate that the authors care(tm), which is one of the most important metrics.

> Go also does not do any sort of "advanced" devirtualization

Depends on the implementation. gc doesn't put a whole lot of effort into optimization, but it isn't the only implementation. In fact, the Go project insists that there must be more than one implementation as part of its mandate.

GoGC is the fastest overall implementation and the one used in >95% of cases, with the alternatives not being up-to-date and producing slower code, aside from select interop scenarios.

Until this changes, the "Depends on the implementation" statement is not going to be true in the context of better performance.

Especially if one is ignorant of how to write proper .NET and Java code.
And startup time. JIT languages are a bad match for command line applications, for example.
.NET can do ahead-of-time compilation now; there are a few gotchas, but it's usable.

https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

This is almost as bad as saying .NET is windows only.
I would be a huge fan of Go, but JSON is just too big a hassle to deal with in Go compared to JavaScript.
Go is an excellent language for people who turned off C# or Typescript, for better or worse.
That's not me, and I use it. I like TS, but in the browser. It doesn't have much use elsewhere, certainly not in the backend. Go is not only simple and stable, it's quite flexible, has a good ecosystem, a wonderful build system, and is really fast and light at runtime.
Java went through this too, although there, a lot of it is part of the ecosystem. See https://chrisdone.com/posts/tamagotchi-tooling/
The Java tooling is the number one thing I hate about using the language. It's all just bad.

Then you get forced into using IntelliJ because it seems to smooth over a lot of the tooling's problems with "magic".

It's horrible.

Saying this as probably the biggest Java fanboy I know: they are pretty bad. Gradle is pretty much the worst build system I've used. IntelliJ might as well be folded into the JDK, because I don't think it's possible to be productive in Java without it.
Everything you said applies to Kotlin as well. Outside of IntelliJ, it is horrible to use. (Ditto Swift outside of Xcode.)
I use Eclipse and NetBeans of my own free will.
> instead of exclusively focusing on making them easier to use or more performant, they are constantly adding features

I appreciate that this is mostly just a generic rant, but it's not really suitable here, because this is a feature which is being added with the sole goal of improved performance.

There's only so much you can do to optimize the extremely dynamic regular objects in JS, and there's no hope of using them for shared-memory multithreading. The purpose of this proposal is to have a less dynamic kind of object which can be made more performant and which can be made suitable for shared-memory multithreading.
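For reference, the explainer sketches syntax along these lines (an early-stage draft; the exact keywords and semantics may well change):

    // Draft syntax from the structs proposal explainer; subject to change.
    shared struct SharedPoint {
      x;
      y;
    }

    const p = new SharedPoint();
    p.x = 1; // fixed layout: fields cannot be added or deleted
    p.y = 2;

    // Unlike ordinary objects, p could be handed to a worker without
    // copying, with both threads seeing the same memory:
    // worker.postMessage(p);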

Do you have examples of unreadable C#? The language didn’t change much IMHO. You have new features, like records, but C# code looks pretty much like what I started with in 2009
Now that I'm thinking about it, most of it is probably .NET bloat instead of C# bloat, but a few examples would be global usings, file-scoped namespaces, records, target-typed new expressions, null-coalescing assignments, etc. It's nothing huge, but combined with .NET bloat it can be overwhelming when you haven't worked in .NET for a while.
All the things you describe make C# more readable and easier to understand!

Are you really confused by file scoped namespaces or target-typed new or even null coalesce assignments?

You don't have to use them -- although Visual Studio will helpfully suggest places you can use them.

If I had never seen a pattern match switch statement before (and there was a point where I didn't) it's sort of immediately obvious what it does.

And pattern matching, primary constructors, the range operator, switch expressions, etc. It does add up.
I came to love pattern matching so much that now when I write TypeScript I get frustrated. It is a weird balance.
All of this looks and feels like C#. It doesn’t look unreadable, or like a completely different language. In fact these features end up making C# more readable by removing boilerplate.
This one threw me off when I first saw it:

    (int x, string y) = (default, default);

> One of the reasons I have so much fun working in node/Javascript these days is because it is simple and not much has changed in express/node/etc for a long time. If I need an iterable that I can simply move through, I just do `let items = [];`. It is so easy and hasn't changed for so many years. I worry that we eventually come out with a dozen ways to do an array and modern code becomes much more challenging to read.

The let keyword didn't exist in JS when Node was first released, nor did for/of, which, while unstated in your post, is probably what you are thinking of. The language has not stayed the same at all.
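For instance, both halves of the "simple, unchanged JS" below are post-2015 additions:

    // Pre-ES2015 Node: var and index loops were the tools available.
    var items = ["a", "b", "c"];
    for (var i = 0; i < items.length; i++) {
      console.log(items[i]);
    }

    // let (block scoping) and for/of both arrived with ES2015.
    let modern = ["a", "b", "c"];
    for (const item of modern) {
      console.log(item);
    }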

>The first time I ever used C# was probably version 5? Maybe? We're on version 12 now and there's so much stuff in there that sometimes modern C# code from experts looks unreadable to me.

The funny thing is, if you used F# over a decade ago, almost all the C# improvements seem familiar. They were lifted from F#, some of them badly.

And I know F# borrows a lot from OCaml. But it's hard to fathom why we need to use the badly adopted F# features in C# instead of just getting F# as a mainline Microsoft-backed language.

> sometimes modern C# code from experts looks unreadable to me

This is a culture issue and has sadly always existed in the C#, Java and C++ communities (and I'm seeing this now with TS just as much; some Go examples are not beacons of readability either; I assume other languages suffer from this similarly).

In the past, people abused BinaryFormatter, XML-based DSLs, occasionally dynamic, Java-style factories of factories of factories, AOP, etc. Nowadays, this is supplanted by completely misplaced use of DDD, MediatR, occasional AutoMapper use (oh god, at least use Mapperly or Mapster) and a continuous spam of back-ends spanning 3 projects and 57 files for something that can be written in under ~300 LOC split into two files using minimal APIs, records and pattern matching (with EF Core even!).

Neither is an example of good code, and the slow but steady realization that simplicity is the key makes me hopeful. But the slow pace of it, and the new ways of making a developer's and a computer's job more difficult that are sometimes introduced by the community and libraries surrounding .NET, and by MS themselves, sour the impression.

Couldn't agree more. More features in a programming language make it easier and more fun to write code, but harder to read and maintain someone else's code. Considering more time is spent maintaining code than writing it (assuming the product is successful), readability is more important than writability.
> it has a huge barrier to entry

You don't have to use every feature of the language. Especially not when you are just learning.

> Now, Typescript is on version 5.6 and there is so much stuff you can do with it that it's overwhelming. And nobody uses most of it!

Exactly. But no-one seems to be arguing that typescript has a huge barrier to entry.

> Now look at modern C++, it's crazy powerful but so jam packed that many people have just gone back to C.

Geez I'd sure hope not.

If you liked C++11, you can use C++11. Every compiler, platform, and library will support it.

No one erased it and made you go back to C99.

I also don’t know how to refine my thought but it’s something along the lines of:

The people who are in a position to decide what features get added to a language are usually top experts and are unlikely to have any reasonable perspective on how complicated is too complicated for the rest of us.

If you live and breathe a language, just one more feature can seem like a small deal.

I think it becomes much more reasonable when that one more feature enables an entire set of capabilities and isn’t just something a library or an existing feature could cover.

For many languages these days, evolution happens in public, and you can look at those discussions and see how the sausage is made. If you do that for C#, if anything, the pattern is that relatively inexperienced users are the ones who propose features (to make some specific pet peeve of theirs easier), while the more experienced folks, and especially the language designers who make the final call, aggressively push back, pointing out corner cases, backwards compatibility, maintenance burden, etc.
And here is the obligatory quote from Bjarne Stroustrup:

"There are only two kinds of languages: the ones people complain about and the ones nobody uses."

Every programming language attempts to expand until it becomes C++. Those languages which cannot so expand are replaced by ones which can.
Go will resist this as long as possible.
Lua too. 30 years and counting!
> Microsoft has a large team dedicated towards improving these languages constantly

… and the people working on these projects need to deliver, else their performance review won’t be good, and their financial rewards (merit increase, bonus, refresher) will be low. And here we are.

Edit: I realize I’m repeating what you said too, but I wanted to make it more clear what’s going on.

From what I've been told, all the nice bonuses and career opportunities are in Azure and other, more business-centric areas. You go to DevDiv to work on Roslyn (C#) or .NET itself because you can do so and care about either or both first and foremost.
At least TypeScript tooling hasn’t changed. It was a pain to set up when it came out and it still is.

At least we've mostly moved past webpack.

> or more performant

Obviously they can't make TS more performant (since it doesn't execute), but C# is very performant and even surpasses Go in the TechEmpower benchmarks.

Absolutely. I love C# and .NET, they are incredible and very fast. I just meant to say that they aren't only focused on performance, but also focused on new features.

One of the best things .NET did was adding minimal APIs in .NET 6 (I think), which are more like Express. They removed a lot of boilerplate and unnecessary stuff, making it easier to start building an API.

TS is pretty impressive in that its compiler is slower than C++ compilers though.
I think part of the reason C# has changed so much (the language, that is, not the CLR) is actually that they took many good things from TypeScript and mixed them into the language. And I think part of the reason TypeScript has become so cumbersome to work with is that it has similarly added a lot of the good things from C#. Which may sound like a contradiction, but I actually agree with you that plain JavaScript is often great. That being said, you don’t actually have to use all the features of TypeScript, and it’s still much better for larger projects in my opinion, mostly because it protects us developers from ourselves without needing organisation-level configuration.

We already use regular JS for some of our internal libraries, because keeping up with how TS transpiles things into JS is just too annoying. Don’t get me wrong, it gets it right 98% of the time, but because it’s not every time, we have to check. The disadvantage is that we actually need/want some form of types. We get them via JSDoc, which can frankly do almost everything TypeScript does for us, but with much poorer IDE support (for the most part). It's also more cumbersome than simply having something like structs.

I recently tried out a very simple language called Gleam. It's a functional programming language running on the BEAM vm, it may appeal to you.
C# since version 2 here, so I’m probably older. You said a lot of words, but gave no concrete examples of what’s bad about these languages. Linters will let you turn off different syntax usages based on your preference on what is readable or not, and C# is the only language I’m aware of where you can build them into the compilation chain and literally cause the compilation to halt instead of merely giving a style warning.
You don't have to use features you don't understand. "Complex" features exist for a reason. To the uninitiated, something like generic types is quite inscrutable, but when you encounter the type of problem they solve, their use becomes much more intuitive. Eventually familiarity yields understanding, and generics reveal themselves to be conceptually quite simple: they're just variables for types.
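A minimal illustration of that last point, in TypeScript notation:

    // Without generics you either duplicate the function per type or erase types.
    function firstAny(items: any[]): any {
      return items[0];
    }

    // With generics, T is a variable standing for whatever type the caller used,
    // so the return type follows automatically.
    function first<T>(items: T[]): T | undefined {
      return items[0];
    }

    const n = first([1, 2, 3]);   // n: number | undefined
    const s = first(["a", "b"]);  // s: string | undefined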
just use haxe
C23 has just been ratified, and C2y has already quite a few proposals.

Programming languages are like any other software product: evolution or stagnation.

Eventually they might implode, however whatever comes after will follow the same cycle yet again.

And branches [1] of C are still spawning and gaining traction, because C++ is perceived as overkill.

[1]: https://github.com/c3lang/c3c

Spawning, yes. Gaining traction beyond a couple of blog posts in sites like HN, not really.

    𝅘𝅥𝅮𝅘𝅥𝅮𝅘𝅥𝅮𝅘𝅥𝅮 

    They've got decorators, record tuples, shadow realms, and rich rekeying
    Dynamic imports, lazy modules, async contexts now displaying

    JSON parsing, destructure privates, string dedenters, map emplacers
    Symbols pointing, pipe operators, range iterators, code enhancers

    Eager asyncs, resource tracking, strict type checks, and error mapping
    Phase imports, struct layouts, buffering specs for data stacking

    Temporal zones, buffer edges, chunking calls for nested fragments
    Explicit locks, throw expressions, float16s for rounding segments

    Base64 for typed arrays, joint collections, parsing pathways
    Atomic pauses, void discarding, module scopes for seamless relays

    Math precision, tuple locking, module imports, code unlocking
    Source phase parses, regex bounds, iterators kept from blocking

    Iterating, winding modules, atomic gates with locks unbound
    Helper methods, contexts binding, async helpers, code aligning

    Soffit panels, circuit breakers, vacuum cleaners, coffee makers
    Calculators, generators, matching salt and pepper shakers

    I can't wait, (no I) I can't wait (oh when)
    When are they gonna open the door?
    I'm goin' (yes I'm) goin', I'm a-goin' to the
    ECMAScript Store
(Continued)

    Proxy traps and symbol iterators, BigInts for calculations greater
    Nullish merging, optional chaining, code that's always up-to-date-ing

    Temporal parsing, binary shifting, WeakRefs for memory lifting
    Intl APIs for global fitting, Promise.any for fastest hitting

    Private fields and static blocks, top-level awaits unblock the clocks
    Logical assignments, numeric seps, each update brings new shocks

    Array flattening, object spreading, RegExp lookbehinds not dreading
    Class fields, global this, and more, the features keep on threading

    I can't wait, (no I) I can't wait (oh when)
    When will they add just one feature more?
    I'm coding (yes I'm) coding, I'm a-coding with the
    ECMAScript lore

Fun weekend project idea: create an MDN docs clone in the spirit of the above and https://git-man-page-generator.lokaltog.net/

It could also include a .babelrc and .eslintrc generator, as proposed in another comment below.

10/10
Thanks. I put 37 actual proposals in here.
automatic circumsizers when
It's stage 0 for now but there are polyfills.

In .babelrc do

    {
      "presets": [
        [
          "@babel/preset-env",
          {
            "targets": {
              "esmodules": true
            },
            "include": ["es.autocirc"] 
          }
        ]
      ]
    }
The general idea of types with a fixed layout seems great, but I'm a lot more dubious about the idea of unsafe blocks. The web is supposed to be a sandbox where we run untrusted code and can expect, with pretty good certainty, that it can't crash the computer. Allowing untrusted code to specify "hey, let me do stuff that can cause data races if not done correctly" is just asking for trouble, and also for exploits. If shared structs are going to be adopted, I think they probably need to be immutable after creation, or at the very least only modifiable with atomic operations.
The ability to do unordered operations on shared memory is important in general to write performant multithreaded code. On x86, which is very close to sequentially consistent by default (it has something called TSO, not SC), there is less of a delta. But the world seems to be moving towards architectures with weaker memory models, in particular ARM, where the performance difference between ordinary operations and sequentially consistent operations is much larger.

For example, if you're protecting the internal state of some data structure with a mutex, the mutex lock and unlock operations are what ensures ordering and visibility of your memory writes. In the critical section, you don't need to do atomic, sequentially consistent accesses. Doing so has no additional safety and only introduces performance overhead, which can be significant on certain architectures.
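You can already see that shape with today's primitives. A minimal sketch of a mutex over a SharedArrayBuffer (the lock-word layout here is made up for illustration, and note that browsers only allow Atomics.wait in workers, not on the main thread):

    // One Int32 lock word at index 0; the data it protects lives at index 1.
    const sab = new SharedArrayBuffer(8);
    const mem = new Int32Array(sab);
    const LOCK = 0;

    function lock() {
      // The atomic compare-exchange (plus wait/notify) provides the ordering...
      while (Atomics.compareExchange(mem, LOCK, 0, 1) !== 0) {
        Atomics.wait(mem, LOCK, 1); // sleep while the lock word is still 1
      }
    }

    function unlock() {
      Atomics.store(mem, LOCK, 0);
      Atomics.notify(mem, LOCK, 1);
    }

    lock();
    mem[1] += 1; // ...so plain, non-atomic accesses suffice in the critical section
    unlock();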

Author here. I hear your feedback about unsafe blocks. Similar sentiment is shared by other delegates of the JS standards committee.

The main reason it is there today is to satisfy some delegates' requirement that we build in guardrails so as to naturally discourage authors from creating thread-unsafe public APIs and libraries by default. We're exploring other ideas to try to satisfy that requirement without unsafe blocks.

There's already a precedent of ownership on transferred objects. Why not have an `.unsafe(cb)` method on the structs? Error if you don't have ownership, then use the callback to temporarily acquire ownership. At least to me, it's more intuitive and seems idiomatic.
Bike-shedding, but you should consider renaming them from "unsafe" to "volatile" or some other word that expresses that they are not unsafe to the user/browser/OS. They are only changeable by other threads.

The word "unsafe" will be picked up as meaning "can infect your computer"; we can already see examples of such messages.

Isn't it really no different from what you can already do with WASM threads, though? C/C++ or unsafe Rust compiled to WASM can have data races, but the worst they can do is crash the WASM instance, just like how you can have use-after-frees or out-of-bounds array accesses in WASM but the blast radius is confined to the instance.

Granted, a JS runtime is significantly more complex than a WASM runtime so there is more room for error.

> crash the WASM instance

I guess it depends on how you get to said crash, but no, data races on Wasm shared memory cannot "crash" anything. At worst racy reads/writes can produce garbage (primitive) values and put garbage bits into memory locations involved in the accesses. Putting garbage bits into a Wasm memory could lead to a program's logic having bugs (e.g. it could then try to access out of bounds or trap for another reason), but the accesses themselves can't crash anything.

This is a good point, I think, in that — on account of wasm — there is really an opportunity for new languages in the browser
SharedArrayBuffer can already do data races on the web. And they can't crash the browser or computer.
It's not unsafe as in "memory segmentation fault" unsafe.

It's unsafe as in, if you don't follow the rules, the resulting value is ~rand().

For those familiar with C/C++ terminology, this is the tame "unspecified behavior" (not the nasal demon "undefined behavior.")

Nit: "unspecified behavior" isn't a thing, at least without some further qualifications. It's usually "unspecified result", or "unspecified result or trap" for certain operations. "unspecified behavior" without further qualifications is just "undefined behavior".

Having said that, an "unspecified result" can still come from anywhere, like a value left in a register from some previous computation or other "garbage" on the stack or heap. This still can be a security issue, even though the behavior is not completely undefined.

Nit nit: Unspecified behavior is absolutely a thing, reference ISO/IEC 14882:2003 §1.3.13 Unspecified Behavior.

The rest is correct.

That was gone with GPU access and WebAssembly.
The unsafe block doesn't actually do anything at all. It's just pointless cargo-culting from Rust.
Huh, I thought that most work that used to use workers switched to WebAssembly.

Talking about JS proposals, I'm looking forward to this one: https://github.com/tc39/proposal-record-tuple

Records and tuples can make a lot of logic much easier to read, and way less fragile. Not sure how they would play together with the shared structs, though.

I don't think R&T will ever ship at this point, since the browser vendors are apparently unwilling to absorb the complexity that would be required to add new primitive types with value semantics.
I've been following that proposal closely, and even (unsuccessfully) tried to contribute suggestions to it. I think what's killing it is that the authors of the proposal won't accept arbitrary values as fields of R/T, but all the potential users are saying that they won't use R/T if they can't put arbitrary values in them.

The reluctance of the authors is due to backward compatibility with sandboxed "secure" JavaScript (SES). That said, every other language in existence that has immutable structs and records allows putting arbitrary values in them.

So it's at a standstill, unfortunately.

If you allow arbitrary values, what's the difference between a record and a frozen object?

I thought that the whole point is to have guaranteed deep immutability, which you can't have if it's got arbitrary objects in it.

> If you allow arbitrary values, what's the difference between a record and a frozen object?

The behaviour of equality. Frozen objects are already considered to have unique identities, in that `Object.freeze({}) !== Object.freeze({})` even though both objects are otherwise indistinguishable. This behaviour can't be changed and it relates to the fact that `Object.freeze(a) === a`.
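Concretely, using the proposal's `#` literal syntax (semantics as proposed, not shipped anywhere):

    Object.freeze({ a: 1 }) === Object.freeze({ a: 1 }); // false: identity
    #{ a: 1 } === #{ a: 1 };  // true (proposed): compared by content
    #[1, 2] === #[1, 2];      // true (proposed), like "ab" === "ab"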

> I thought that the whole point is to have guaranteed deep immutability

Not really. The whole point apparently according to most people[0] is to have composite values that don't have unique identities, so they fit in with all the existing comparison operations (eg, `===`, `Map`, `indexOf`, `includes`) just as you can do with strings.

Immutability is a prerequisite for this, since if `a` and `b` are mutable, mutating `a` might be different to mutating `b`. Thinking again about strings, equality works because strings are immutable:

  const foo = "foo", bar = "bar";
  const a = foo + bar;
  const b = foo + bar;
  a === b; // true
Implementations will typically use different underlying memory allocations for these strings[1], but at a language level they are considered to be the same value. If it were possible to modify one of the strings (but not the other) using `a[0] = "x";` it would mean `a` and `b` are not equivalent so should not be considered equal.

As explained here[2], deep immutability is not necessary for this behaviour.

In my opinion guaranteed "deep immutability" is not generally useful/meaningful (if you have a particular use case, feel free to share it). In theory it's not possible to enforce "deep immutability" because someone can always refer to something mutable, whether that's an object reference or a number indexing a mutable array.

If you really do want something that guarantees a certain notion of "deep immutability", this concept seems somewhat orthogonal to records/tuples, since there are existing values (eg, strings and numbers) that should be considered deeply immutable, so you'd expect to have a separate predicate[3][4] for detecting this, which would be able to effectively search a given value for object references.

In case you're interested I tried to summarise the logic behind the rejection of this behaviour[5] (which I disagree with), but it's very much a TLDR so further reading of linked issues would be required to understand the points made. Interestingly, this post is on an issue raised by the odd person that actually tried to use the feature and naturally ran into this restriction.

Sorry for this massive wall of text, but I think it's hard to capture the various trains of thought concisely.

[0] https://github.com/tc39/proposal-record-tuple/issues/387#iss...

[1] https://github.com/tc39/proposal-record-tuple/issues/292#iss...

[2] https://github.com/tc39/proposal-record-tuple/issues/292#iss...

[3] https://github.com/tc39/proposal-record-tuple/issues/292#iss...

[4] https://github.com/tc39/proposal-record-tuple/issues/206 (I believe sjrd (GP) earlier independently came up with the same function name and behaviour somewhere in this thread, but GitHub seems to be failing to load it)

[5] https://github.com/tc39/proposal-record-tuple/issues/390#iss...

Thanks for the history! Reading through the issues, I agree with you that some of the motivations against objects in records seem pretty strange. Mostly they seem to be around existing JS-written 'membranes' (related to the SES stuff mentioned above?) getting confused by primitives-containing-objects, depending on which permutation of typeof checks they use. Out of curiosity, do you think that the Shadow Realms proposal they refer to will ever go anywhere?

Otherwise, there's the argument that "x.y" syntax shan't be used to access a mutable object from an immutable record, but that just feels like the all-too-common motive of "we must ensure that users write morally-correct code (given our weird idiosyncratic idea of moral correctness), or otherwise make them pay the price for their sins".

> Out of curiosity, do you think that the Shadow Realms proposal they refer to will ever go anywhere?

I haven't really been following the Shadow Realm proposal (I'm not part of TC39, so only familiar with certain proposals), but I don't think it should conflict with R/T.

If R/T values are allowed to be passed between realms, they should effectively be "transformed" such that eg, `f(#[v])` is equivalent to `f(#[f(v)])` (where `f` is the transformation that allows values to be passed between realms). For "deeply immutable" values (no object references), `f(v)` will simply return `v` (eg, `#[42]`, `f(#[42])` and `f(#[f(42)])` are all the same) and a membrane should be able to trivially optimise this case.

From this comment[0] it sounds like `f({})` in the current Shadow Realm proposal will throw an error, so I'd expect that `f(#[{}])` would also throw an error.

As you were pointing out, I think the only real contention between R/T and realms is in existing JS implementations of membranes, particularly because they might use the following condition to detect if something is "deeply immutable":

  v === null || typeof v !== "object" && typeof v !== "function"
If `typeof #[{}] === "tuple"`, then their `f` function will pass that value through without handling the contained object value by throwing or by creating/finding a proxy.

If `typeof #[{}] === "object"`, it should be fine because `f(#[{}])` will either throw or create/find a proxy for the tuple. There might be some unexpected behaviour around equality of R/T values passed through the membrane, but this is pretty obscure and it should be fixed once the membrane library is updated to handle R/T values.

Personally, I'm still not 100% convinced that the assumptions made from the above condition are important enough to cause such a change to the proposal, but I don't see the value of `typeof #[]` as being a usability issue. Code that needs to check the types of things is a bit smelly to me, but in cases where you do need to check the type, `typeof v === "tuple"` and `Tuple.isTuple(v)` both seem usable to me, so just making `typeof #[] === "object"` should be fine and it solves this hypothetical issue. This is similar to array objects, which are also fundamentally special (`Object.create(Array.prototype)` is not an array object) and are detected using `Array.isArray(v)`.

> Otherwise, there's the argument that "x.y" syntax shan't be used to access a mutable object from an immutable record, but that just feels like the all-too-common motive of "we must ensure that users write morally-correct code (given our weird idiosyncratic idea of moral correctness), or otherwise make them pay the price for their sins".

Agreed, and I've pointed out[1] that even the current proposal doesn't address this, since unless you've done some defensive check on `x`, there's nothing stopping someone passing a mutable object for `x` instead of a record. If you do want to perform a dynamic[2] defensive check, perhaps you should be asking "is it deeply immutable?" or even checking its shape rather than "is it a record?".

[0] https://github.com/tc39/proposal-record-tuple/issues/390#iss...

[1] https://github.com/tc39/proposal-record-tuple/issues/292#iss...

[2] If you're using a type system like TypeScript, this check should happen statically, because you'll use a type that specifies that it's both a record and the types of the properties within it, so your type will encode whether or not it contains mutable objects

Thank you so much for the insight!
I feel conflicted. Working with multithreaded stuff in JS is a huge PITA. This would go some way to making things easier. But it also feels like it would radically complicate JS. Unsafe blocks? Wow-eee.

With the rise of WASM part of me feels like we shouldn't even try to make JS better at multithreading and just use other languages better suited to the purpose. But then I'm a pessimist.

I read this and think "can't we just make freezing objects less expensive?"

Otherwise, that's all this seems like to me: a class where all instances are automatically frozen. Which is a great semantic, but this proposal exposes way too much of the internals to achieve it.

Modern development is so goofy.

Puts me in mind of that meme with the beginner -> intermediate -> expert chart with something like Rust

Beginner: just clone everything

Intermediate: work out every intricacy that allows us to use multiple lifetimes

Expert: just clone everything

This proposal feels like it's in the middle.

> With the rise of WASM part of me feels like we shouldn't even try to make JS better at multithreading and just use other languages better suited to the purpose.

I think TS is a negative influence on JS, because now instead of saying "maybe we should fix the JS type system" they just say "no need to fix what's broken, people who care will just use TS anyway" (even though TS can only do so much).

On the other hand, TS mainstreamed the idea of typed JS (well, ActionScript did that decades ago, but somehow no one noticed or cared?), so it's also a positive influence?

Most people are drawn to WASM because "I can do frontend stuff without writing JS!", but for the most part that's not true. In my experience, the problems introduced by the indirection and interop, the more complex mental model, and the bloated (and fragile) build system were not worth it, and I just switched back to TS.

So I do really wish that JS would be improved -- it remains inescapable -- especially with regard to fixing fundamental design flaws rather than just adding more shiny stuff on top.

A better title would be "A proposal for shared-memory multithreading". The term "struct" has a meaning in the C language that is somewhat misleading here, since the purpose is not organization but enabling shared memory.

In my experience, the advantage of JavaScript over other languages I have used - COBOL, Fortran, assembly, C, C++, Java - is the fine balance it strikes between expressibility and effectiveness.

I am not opposed to shared memory multi-threading, but question the cost/benefit ratio of this proposal. As many comments suggest, maintaining expressibility is a high priority and there are plenty of gotchas in JavaScript already.

As an example, I find the use of an upfront term like "async" to work quite well. If I see that term I can easily switch hats and look at the code differently. Perhaps we could look at other mechanisms, say an upfront "shm" term rather than a new type, but what do I know?

[edit for clarity since I think faster than I can type]

I don't understand the need for the ever-growing list of "enhancements" to JS. Take class, for example.

Class is entirely unnecessary and, essentially, tries to turn JS into a class-oriented language when, at its core, it is object-oriented.

I never create classes. I always create factory functions which, when appropriate, can accept other objects for composition.

And I don't use prototypes, because they are unnecessary as well. Thus sparing me the inconvenience, and potential issues, of using 'this'.

In my dreams those who want to turn JS into C# or Java should just create a language they like and stop piling on to JS.

But, at least so far, the core of JS has not been ruined.

That said, there are some new features I like. Promises/async/await, Map, Set, enhancements to Array being among them. But to my way of thinking they do not change the nature of the language in any way.

Otoh, I create classes, use prototypes and it’s natural and useful in many of my cases.

> In my dreams those who want to turn JS into C# or Java should just create a language they like and stop piling on to JS.

We could even share this dream if browser vendors weren’t such "who’s the boss, I am da boss" types when it comes to extensions and alternatives. So we have to live with a common denominator, which surprisingly isn’t as bad as it could be, really.

I wonder why it seems natural to you? I'm guessing JS wasn't your first language and you didn't learn the power of composition instead of classes.
Because I think in objects (non-strictly related groups of data and methods) and it’s natural to how my business processes work. Light OOP creates neither translation nor maintenance layers to it. See https://news.ycombinator.com/item?id=41808034

> I'm guessing JS wasn't your first language

Good intuition. My first language was BASIC, then 8080 asm, x86 asm, Pascal, C, Perl, Python, Haskell (most useless), Lua, ObjC. JS/TS is only a recent addition, so I might have missed some fashionable ideas.

Tongue in cheek aside, if you’re an old dev, there’s nothing you have to listen to, because you can see for yourself whether you have a problem and decide for yourself. You can be your own advisor. I see both the "classes" and "just functions" ways clearly and can convert my current codebases between the two in my mind. Nothing really changes for the latter, apart from bulky import sections, lots of * as ident imports, context-arg passing and a few dispatch points. Objects (non-strictly related groups of data and methods) still exist and hold refs to event/callback emitters. So my reasoning isn’t why; my reasoning is why not. I have a tool, I have business logic, pen pineapple apple pen. Don’t overthink it is my main principle.

Do I need to introduce composition? Do I have it already? How is it better than what I’m doing? Is it? What am I missing? What are they missing? What if they don’t? What if we speak of different things? These are the questions of a restless butt that cannot find rest on any stool. Instead it should ask: Do I have a problem?

:)

I started with 6800 machine language. Then C, Smalltalk, Scheme, etc.

Rather than spend a lot of time and botch a comparison between classes and factory functions, I'll link you to an article.

He went further, introducing something he calls stamps, but I found them awkward the only time I tried to use them.

https://medium.com/javascript-scene/javascript-factory-funct...

Thanks, this adds to my standard low-level experiments in a new language. The most interesting (or should I say well-thought-through and at the same time tricky) part of JS is how prototypes and properties work. Rarely does a language have similar complexity at that level, but I started to respect it the second I got an overview, because it addresses well the pain points that simpler designs usually have.

That said, it’s hard for me to buy into his arguments, in the sense that it doesn’t matter that much, if at all. instanceof doesn’t work across realms and is nuanced for direct prototyping and Object.create(), but I never use or care about these in my code, by design. There’s no way such a value could appear in a false-negative instanceof comparison. A similar thing happens in COM/OLE-integrated runtimes, where you have to be careful with what quacks like a date or a string but is neither, due to a wrapper. But that’s expected.

I believe the real issue here is that iframes/etc usually get served as "some values aren’t, so use X, be careful" rather than "guys, it’s an effing wrapper to an effing different runtime, which we found to be an overall anti-pattern many years ago". Browsers and webguys normalized it; well, they normalized much crazy stuff. Not my problem. There’s no need to learn to balance on two chairs when it’s not what you do when sober. I still use Array.isArray(), but only because every linter out there annoys you to hell into it.

Tldr: classes are neat; you can pry them from my cold dead hands.

The only thing to care about with classes is not falling into the inheritance trap, and not for the reasons of instanceof. Inheritance is a tree of ladders attached with duct tape; you have to know what you’re trying to do to your design before thinking about it. The most sane use of inheritance is one-off, from a library to a user (two separate developing agents agree on an implied behavior: the "I implemented it for you to randomly extend and pass back" mode), or for helping type inference. Otherwise, the way to go is to eject common behavior into a separate class or a couple of functions (aka composition).

Fair points. I do use classes exposed by libraries. I don't like it but it beats the alternative.

I really like the flexibility of factory functions, that is the main point of the article imo.

COM/OLE ... that takes me back to the early 90s, a place I hoped to never visit again!

:)

Hello! JS was my first language, and I use classes because sometimes they seem like the obvious way to model things.

Looking through the source of Replicache, here are some classes we use:

- KVStore

- DAGStore

- Transaction

I mean ... I can of course model these w/o classes, but encapsulating the state and methods together feels right to me. Especially when there is private state that only the methods should manipulate.

We use composition all over the place and rarely use inheritance, so I don't think it's just some deficiency of knowledge.

Pre JS classes, the JS community emulated classes with the prototype chain, and that's what I'd have done for these classes if real JS classes weren't available.

Closures can encapsulate state and methods; classes are syntactic sugar.

Emulating classes is, imo, exactly the problem.

Using factory functions which create and return an object, with variables passed to and created in the function, handles encapsulation.

And there is no `this` to deal with.
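A minimal sketch of that style:

    // Factory function: state lives in the closure; no `this`, no `new`.
    function makeCounter(start = 0) {
      let count = start; // effectively private
      return {
        increment() { count += 1; return count; },
        value() { return count; },
      };
    }

    const counter = makeCounter(10);
    counter.increment(); // 11
    counter.count;       // undefined: the state is unreachable from outside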

Honestly, all this "emulated with prototypes" meme is misleading. Prototypes are an implementation of what used to be called "binding" (some may recognize "early" and "late" in this context). That's how classes work and what classes are. The fact that some gears got exposed to the user [to stick their fingers into] doesn't change much.

So no, JavaScript didn't really "add classes". It just had a very annoying lower-level syntax for them from the beginning and fixed it after a while. It wouldn't have survived the pressure if it had no classes at all, because this idea is fundamental to programming and to how we think: you-do.

One may pretend not to have classes through closures, but technically that's just classes again, because you have a bunch of functions with a shared upvalue block. You just hold it the other way round lexically, by a method instead of a context.

I believe this common idea of alienating classes stems from the general OOP stigma since the times of "design patterns".

Classes generally point you towards writing more performant code. Factory functions allow you to achieve the same performance; you just have to be a bit more careful not to cause a deopt :)

For example,

1. Field declarations [1] make sure that the fields are always initialized in the same order. That way most of your functions end up monomorphic instead of polymorphic [2].

2. Method declarations are also (almost) free, since you only pay for them once, during class initialization.

You also get a few other niceties, such as private properties. You can emulate private properties with closures in factory functions, but V8 has a hard time optimizing them, unfortunately.
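A sketch of point 1; the "shapes" in the comments refer to the hidden classes engines like V8 use to track object layout:

    // Field declarations: every instance is created with the same layout.
    class Point {
      x = 0;
      y = 0;
    }

    // Hand-rolled objects can silently diverge into multiple shapes:
    function makePoint(flip) {
      const p = {};
      if (flip) { p.y = 0; p.x = 0; } // shape {y, x}
      else      { p.x = 0; p.y = 0; } // shape {x, y}
      return p; // code receiving p now sees two shapes -> polymorphic access
    }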

---

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[2]: https://www.builder.io/blog/monomorphic-javascript

The difference in performance is negligible for all but the most demanding applications. In which case I would still use factory functions, but just be more careful.
I agree. But it's one reason why someone who is concerned about performance, may prefer classes by default.
I like JS's flexibility too, but I have to point out that your object-oriented JS code is compiled into C++ classes by the V8 optimizer! (Unless you change their structure, in which case it gives up (deoptimization).)
Does it compile something like this into a C++ class?

    function makeThing(options, usethistoo) {
      let foo = options.foo;
      let thistoo = usethistoo;

      return {
        // functions...
      };
    }
I'm not sure if "C++ class" is the right term, but it will certainly compile into a class behind the scenes [1]

You can use d8 to check what the class structure ends up looking like [2]

---

[1]: https://v8.dev/docs/hidden-classes

[2]: https://v8.dev/docs/d8

> And I don't use prototypes, because they are unnecessary as well. Thus sparing me the inconvenience, and potential issues, of using 'this'.

Eh, prototypes share, instead of create, method references. I guess you can use delegate objects too though unless you're just doing pure functions.

Sure, but imo unless one is creating very many objects each with their own set of functions it's not really a significant issue.

Sometimes programmers spend way too much time optimizing code which doesn't really need it.

In my experience how data is structured is almost always the most important factor when it comes to performance.

Good data structure + simple code === performance.

Huh, no types. So every field is 8 bytes I guess?

I suppose if you want a defined/packed memory layout you can already use SharedArrayBuffer, and if you want to store objects in it you can use the BufferBackedObject library they linked. https://github.com/GoogleChromeLabs/buffer-backed-object
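That library automates the kind of manual packing you can already do with a DataView; a sketch with a made-up two-field record:

    // Packed layout over shared memory: { id: u32, score: f32 } in 8 bytes.
    const buf = new SharedArrayBuffer(8);
    const view = new DataView(buf);

    function writeRecord(id, score) {
      view.setUint32(0, id, true);     // little-endian u32 at byte offset 0
      view.setFloat32(4, score, true); // f32 at byte offset 4
    }

    function readRecord() {
      return { id: view.getUint32(0, true), score: view.getFloat32(4, true) };
    }

    writeRecord(7, 0.5);
    readRecord(); // { id: 7, score: 0.5 }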

I also expect that in browsers this will have the same cross-origin isolation requirements as SharedArrayBuffer that make it difficult to use.

To be more precise, aligned to whatever size such that you can guarantee field writes that don't tear. Pointer-aligned is a safe bet. 4-byte aligned should be okay too on 64bit architectures if you use pointer compression like V8 does.

What kind of types did you have in mind? Machine integers and "any" (i.e., a JS primitive or object)?

And yes, in browsers this will be gated by cross-origin isolation.

If the memory layout is fixed and fields are untyped then every field must be at least 8 bytes to potentially hold a double precision floating point value. There would clearly be value in adding typing to restrict field values to 1 or 2 or 4 byte integers to allow packing those fields. But I can see that it would add complexity.
Only if your implementation holds doubles without boxing them. V8 boxes doubles, but JSC and SpiderMonkey do not.
Most of the JavaScript developers I've encountered recently refuse to use Map, and if you dare use it, they will say that it's complicated code and premature optimisation before even making an attempt to understand it.

I feel like trying to add fast data structures to JavaScript is futile; I think at this point it would be better to make it easier for JavaScript and the browser to interface with faster languages.

The only thing I would add to JavaScript at this point is first-class TypeScript support, so that we can ditch the transpilers.
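For what it's worth, the Map being refused is not exotic; it fixes real problems plain objects have as dictionaries:

    const hits = new Map();
    const key = { route: "/home" }; // any value can be a key, not just strings
    hits.set(key, 1);
    hits.set("constructor", 2);     // no prototype-chain collisions, unlike {}
    hits.get(key);                  // 1
    hits.size;                      // 2, without Object.keys(...).length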

    // Step 1: Convert JSON object to string
    const jsonObject = { name: "John", age: 30 };
    const jsonString = JSON.stringify(jsonObject);

    // Step 2: Convert the string to binary data
    const encoder = new TextEncoder();
    const encodedJson = encoder.encode(jsonString);

    // Step 3: Create a SharedArrayBuffer and a Uint8Array view
    const sharedArrayBuffer = new SharedArrayBuffer(encodedJson.length);
    const sharedArray = new Uint8Array(sharedArrayBuffer);

    // Step 4: Store the encoded data in the SharedArrayBuffer
    sharedArray.set(encodedJson);

Now you can use Atomics, no?

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
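Roughly, yes. A minimal sketch (note that Atomics.wait only works on an Int32Array over a SharedArrayBuffer, and blocking is only allowed in workers; SharedArrayBuffer itself needs cross-origin isolation in browsers):

  const sab = new SharedArrayBuffer(4);
  const flag = new Int32Array(sab);

  // Writer side (e.g. a worker): publish a value, wake any waiters
  Atomics.store(flag, 0, 1);
  Atomics.notify(flag, 0);

  // Reader side: tear-free read; a worker could instead block with
  // Atomics.wait(flag, 0, 0) until notified
  console.log(Atomics.load(flag, 0)); // 1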

Not sure whether this is a good idea or not. On one hand it'd be awesome for doing performance-oriented and threaded code in JS runtimes; the idea seems related to how C# structs already work (and tuples under the hood). Interop with WASM code might also be simplified if struct-like access were built in.

The bad part is that people wouldn't necessarily be prepared for their semantics (are they value- or reference-based?), nor for how to share prototypes between environments (mentioned as a problem in the proposal itself), and I'm not entirely sure whether this proposal would add complexity vs. security for Spectre-like attacks.

It'd be useful, but whether it's worth it is another question. And would all major players see interest in it? Especially considering that it'd need to be a "JS0"-level proposal if they go in that direction. (There was a post here a few days ago about layering runtimes, with JS0 being the core and everything else being syntax transforms on top.)

pjmlp:
Note that the idea isn't unique to C# structs, other GC enabled languages have similar capabilities.
I thought it said "Proposal: JavaScript Sucks" and was not surprised by the number of upvotes from HN
(Wipes away tears of laughter) I needed that! [1] [2]

[1] just got some bad news [2] all in all, I love working in JS when I have to, but I've worked in it long enough to know a good many of the footguns.

Why couldn’t you just write a paragraph instead of using some citing system for formulating your sentence?
They're footnotes and are meant to be read as supplementary notes to the core message.
But we already have a way to add supplementary notes: parentheses. Like this:

  Today I went for a walk (which I don't usually do), and I saw a squirrel.
Or have I been doing it wrong?
syg:
Well I'm trying to make it suck less.
Fixed-layout structs seem like a no-brainer and a natural extension of the typed arrays. It's strange that both Java and JavaScript went so long without them. Interacting with many APIs (WebGPU, FFI, …) quickly becomes really unpleasant if you can't control data layout.
My head is spinning after skimming the sections on shared memory, locks, mutexes, etc. Implementation and adoption would probably be a decade-long saga. Not to mention teaching folks when to use these and how to use them correctly.

In e.g. Elixir these are non-issues. Please, just give us declarative structs that are immutable by default (if they’re really needed, make constructors and mutability opt-in). Isn’t the trend already toward more FP in JS?

dvlsg:
There's technically a proposal to add immutable lists and records floating around somewhere. I think it's kind of old at this point. I'm still hoping it makes it through, though.
JS devs - do everything but write in another language challenge level: Impossible.
There's a lot of that, certainly, but there are legitimate reason to use JS/TS.

Frontend is an obvious one but also using services like CF Workers or Deno Deploy which are optimized for V8. You're going to get better uptime and lower latency than anything else at that cost.

Serious question: when would you want to use synchronization primitives in the frontend? I've seen the discussion revolve around service worker / web worker usage, but I think sharing resources between a worker and the main thread has been discussed in the past and was ultimately abandoned for security reasons.
what do you mean by "synchronization primitives"?

do you mean like reactive data?

Lots of programmers only really feel comfortable in C++ because it's the language they were trained in.
baxuz:
What a bad take. If you're writing code that runs in a browser, there's no other choice. I'm not counting things like ClojureScript, as you still need deep knowledge of the low-level primitives, which bind exclusively to JS.

You can use other languages that compile to WebAssembly, but it's borderline as it's basically just a VM / self-contained executable that you can pipe to. It's completely isolated from the browser.

Call me when browsers support another language. What are we going to use? CSS?
Well... sort of?

https://emscripten.org

Which compiles to...
jitl:
WebAssembly
Which is only usable on the web with...
Amazing dev experience for sure. 10/10.
wasm

"but wasm has to call JavaScript to use browser APIs" WasmGC is shipped in Chrome and Firefox and enabled by default in WebKit nightly

Let me know when WASM has a dev workflow that gets a change to your browser 1/10th as fast as Vite + TypeScript + React.
xpe:
I will grant that fast iteration is beneficial. But for me, under 10 seconds is usually fast enough. (For example, I don't think I would care too much when comparing 0.5 vs 5 second builds.)

I personally care a lot more about having a confidence-inspiring language and ecosystem.

In my experience with Rust and WASM (with various tools such as Dioxus), I find myself caring a lot more about the WASM ecosystem and browser evolution/improvement.

For example, at bottom, the JS interop feels pretty sub-optimal. Calling it "hacky" might even be deserved: I'm talking about memory serialization between JS-land and WASM-land. As I understand it, we may see significant improvement under the hood in the next few years. (I'm not an expert on the particular proposals, their adoption, etc. Please weigh in if you have a better sense.)

The reason I bring it up is because web-dev often ends up being an extremely long sequence of small tweaks. "Hmm, did that make the modal go on top of the sidebar? [save] no? [save] did that? [save] how about that? [savesavesavesavesavesave]" Iterate that process 1000 times and you have my typical workflow. This is why even a 10 second build time, which is pretty fast for most domains, is actually pretty mind-numbing in web-dev.

I'm all for having a confidence-inspiring language and ecosystem, don't get me wrong, but it's kind of a non-starter if I can't build at the same pace in Rust as I can in typical web technologies.

The web is undefeated in iteration time for sure. There are some native workflows that approach it with hot code reloading, but I'm not aware of anyone doing that for wasm yet.
nuz:
WasmGC is just garbage collection? Not browser APIs
My understanding is that GC integration was the major blocker for better wasm/browser API integration. Now that it is here I bet better integration is not far behind, although I haven't personally investigated the current proposals.
WASM has to call javascript to do really anything.
Great, show us how you render something on the page with wasm only
Oh yeah. CSS is turing complete, after all!
Got it. Writing in Typescript.
I initially didn't like the high level idea, but I warmed up to it. My only concern is that the constructor isn't guaranteed to define the same fields with the same types, which kind of defeats the point.

I'd improve this proposal in two ways:

1. Explicitly define the layout with types. It's new syntax already, you can be spicy here.

2. Define a way for structs to be directly read into and out of ArrayBuffers. Fixed layout memory and serialization go hand in hand. Obviously a lot of unanswered questions here but that's the point of the process.

The unsafe block stuff, frankly, seems like it should be part of a separate proposal.
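For point 2, the kind of round trip I mean, sketched with today's DataView (the struct syntax doesn't exist yet, so a plain sealed object stands in for a { x: float64, y: float64 } struct):

  function writePoint(view, offset, p) {
    view.setFloat64(offset, p.x, true);
    view.setFloat64(offset + 8, p.y, true);
  }

  function readPoint(view, offset) {
    return Object.seal({
      x: view.getFloat64(offset, true),
      y: view.getFloat64(offset + 8, true),
    });
  }

  const buf = new ArrayBuffer(16);
  const view = new DataView(buf);
  writePoint(view, 0, { x: 1.5, y: -2.25 });
  console.log(readPoint(view, 0)); // { x: 1.5, y: -2.25 }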

I agree; the point of a struct type would be to allow a compact memory representation, and you're not going to get it if your constructor can do if(someArg) { a = 1; } else { a = 1; b = 2; }.

You don't strictly need known/consistent types, but it sure helps, since otherwise everything needs to be 8 bytes.

I don't think a way to read into and out of ArrayBuffers is possible, since these can have pointers in them. I think it needs a StructArray class instead, so there's a way to actually make a compact memory array out of all of this.

> You don't strictly need known/consistent types, but it sure helps, since otherwise everything needs to be 8 bytes.

Arguably that's worse than what the runtime is able to do today already with hidden classes.

> I don't think a way to read into and out of ArrayBuffers is possible

If you know all the types and only allow structs and primitives, you could use relative pointers to encode the 2nd+ references to structs that appear more than once in the encoded object. You'd need a StructArray for efficient arrays, but a linked list would encode pretty compactly. But you're very right.

When reading the proposal title, I thought this was for interop with WASM. Having fixed-size structs where every field has a wasm-related type would be beautiful for interop: a wasm function could just return or receive an instance of a typed struct. No more reading the result using a DataView or something like that; today we have to use something like BufferBackedObject for that.
The thing I like about this is when I get a heap dump I could get names for things instead of "object shapes", which would be cool.
Happy to see this effort.

When applying ReactJS in webdev, after doing all kinds of engineering in all kinds of (mostly typed) languages and runtimes, I was surprised that JS did not actually have a struct/record as seen in C/Pascal. Everything is a prototype that pretends it's an object, but without types and pointers, plus abstraction layers that add complexity for the sake of backwards compatibility.

Not even the kind of object hack that many OO and compiled languages had. ES did not add it either, so my hopes were in WebAssembly.

This proposal, however, seems like something I'd actually like to use a lot.

A lot of my code complexity existed to get simple guarantees about data quality. The alternative was to not care, which is either a feature or a caveat of the prototype model, depending on your view.

baxuz:
This looks like it's going to be a great fit for emscripten, especially multithreaded.
@syg If you happen around to answer more questions: why go with only Sealed prototypes for structs? Personally, I would assume that with static initializer blocks we could well go with "initially Sealed" and "Frozen once class initialization completes", i.e. make the last step of the "StructDefinitionEvaluation" AO SetIntegrityLevel(F, FROZEN).

This way I'd assume eg. decorators would be usable on struct fields and methods, but engines would be safe to cache prototype method lookup result values without any validity cell mechanics. I would assume this could make prototype method calls on structs very fast indeed.

andai:
A stricter, faster subset of JS would be very welcome, which seems to be what the unshared struct part of this proposal provides.

By the way, doesn't V8's optimizer already do something like this internally? I read one of their tech blogs back in the day that explained how they analyze the structure of objects and whenever possible, compile it to the equivalent of a C++ class.

I guess doing it explicitly makes the optimizer's job much easier -- the more guarantees you give it about what won't happen, the more optimizations it's free to make.
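Right. The pattern engines reward today looks something like this (an illustrative sketch; how much it matters depends on the engine):

  function getX(p) { return p.x; } // property access with an inline cache

  // Monomorphic: every object passed in has the same hidden class {x, y},
  // so the engine can cache the offset of `x` once
  for (let i = 0; i < 1000; i++) getX({ x: i, y: i });

  // Polymorphic: different shapes flow through the same call site,
  // and the inline cache degrades
  getX({ x: 1 });
  getX({ x: 1, y: 2, z: 3 });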

Not JS, but AssemblyScript sits in much the same niche: https://www.assemblyscript.org/
I thought we were done shoehorning all possible CS concepts into javascript?
ffsm8:
There is a very specific goal the authors want to achieve, and it's not what you seem to think from reading the title of the link.
Except for the pipeline operator
Can see "why", but I can't really see why a new syntax is warranted. This feature is expected to be used infrequently and probably has to be defined as an ECMAScript extension only in order to put it into WebAssembly. A "fake" prototype that indicates strictness should be enough for implementations (and polyfills). There are many other issues but that is glaring enough to be pointed out.
The similarity with CL's divide between structs and classes is uncanny; especially with ((:type vector) :named).
What a mess this language is becoming.
I think there is a spillover of financially incentivized "innovation" stemming from the companies involved in the browser / web space.

If you are fairly senior or aiming for some sort of promotion this is the sort of thing that looks great on your resume.

I doubt that it is driven by a desire to help consuming devs build better quality products more quickly or easily.

I love JavaScript! It’s such an exciting development experience loaded with surprises! I was just thinking the other day how cool it would be if it had unsafe blocks like in Rust. What an exciting time to be alive!
Maybe I'm missing something, but this seems to add very little. Please correct me if I am missing something.

1) Structs encourage a coding style that restricts what you can do. Isn't this inflexibility then negated by adding unsafe blocks?

2) Structs don't, as far as I can see, address any of the _actual_ weaknesses of JS classes, such as not being able to create async constructors.

3) The cited performance benefits seem a bit strange. JS has no access to pointers or memory by design, so I don't understand why structs will automatically make things faster. Surely it makes more sense to refine the V8 engine, or even focus on WASM, rather than adding syntactic sugar to vanilla JS.

That said, props to people who care enough to write a proposal, and if I am missing the point of structs, sorry for the negativity.

I'm kind of with you on this. I probably didn't read enough details, but it sounds like a library that abstracted over shared typed arrays would already do all of this, or at least solve the same problem.

I'd rather see binary struct views added to typed arrays, ideally with a settable offset so you don't have to create a new view for every instance. That seems more useful than this middle ground that can already be polyfilled. I guess binary structs can also be polyfilled, but they feel like a far more obvious speed win: marshalling data in/out of WASM, in/out of WebGPU/WebGL, parsing binary files, and sharing data across shared memory all get solved at once, and with speed.

1. Unsafe blocks only apply to shared structs, and shared structs are basically only relevant with unsafe blocks. This is somewhat two proposals in one: Shared structs which require more restrictions to be shareable at all but are still kind of unsafe (because data races), and then unshared structs which adopt those same restrictions without being shareable. In exchange, they become much easier to optimise for engines. Unsafe blocks are there for the first part, unshared structs kind of come as a two-for-one prize on top.

2. Indeed, structs are rather an entirely different track to classes. Only the syntax is borrowed from them.

3. There's a bunch of stuff that the engine will do for you to try to make your code faster. The most important thing (arguably) is inline caching: when you access `foo.bar` inside a function, your engine will remember the "shape" of the `foo` object (if it is an object, that is) and where the property `bar` was found inside of it. Unfortunately, objects tend to be pretty fluid things, so the shape of an object changes. This creates a "transition" graph of shapes, and it's pretty hairy stuff. It's also a source of memory-safety bugs in browsers: browsers want to avoid re-checking the shape of an object if it cannot have changed, but this is mostly a manual optimisation, and e.g. Proxies really make it so nearly everything can change an object's shape. A misapplied shape-caching optimisation is easy to turn into an arbitrary read/write primitive, which is then a great way to escape the sandbox.

Imagine, then, that an object type existed that could be primitively guaranteed to never change its shape. Oh, the engine would loooove that. No worries about memory-safety mistakes; just cache the shape when you first see it and off to the races you go!

This applies doubly to any prototypes (which are proposed here to be only sealed; I'd personally want to see them frozen, so that not only the shape but also the values can be cached): an object's shape may stay the same, but the prototype may change with key deletions and additions. This means that looking up the function to call for `obj.hasOwnProperty("key")` theoretically needs to be redone every time. Engines of course optimise this into a fairly complex linked list of booleans, but by golly, wouldn't it be easier if the engine could just statically cache that the property we're looking for is found in this particular prototype object at a particular memory offset?
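A toy example of why those prototype lookups can't naively be cached today:

  const proto = { greet() { return "hi"; } };
  const obj = Object.create(proto);

  obj.greet(); // found on proto; the engine would love to cache this

  // ...but any of these invalidates what it learned about the lookup:
  delete proto.greet;
  proto.greet = () => "hello";    // same name, different function
  Object.setPrototypeOf(obj, {}); // the whole chain can change, too

  // A frozen prototype would rule all of that out up front:
  const frozenProto = Object.freeze({ greet() { return "hi"; } });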

Source: I lurk around in some adjacent circles, and am writing my own JavaScript engine built with potentially peculiar ideas about what makes good JavaScript.

Structs
I mean... after ES6 with classes, what is JavaScript anyway? Just bring structs too; the more the merrier.
I know, right?

Fortunately we still aren't forced to use all the 'enhancements'.

I just want structs (in regard to their impact on the garbage collector).
Is there a way to "vote" on these types of proposals? (Just asking for a friend who sees this as bloat and does not want to deal with other people's code which uses this unnecessarily)
what
personally I'm super excited to see this -- have been wanting something along these lines for quite some time.
leoh:
Needs a re-entrant mutex?
The scope of this proposal is too large. If it comes down to preference, I'm not a Rust fanboy, but I think they got the struct/impl paradigm right.
Nahhh
Leave. The. Language. Alone.
[flagged]
Please don't do this. These summaries literally add nothing to the discussion.
They add at most nothing ;)
Far too long to be a TLDR. Especially since it’s AI slop.

I’d either want one or two sentences or just read the first party source.

Please stop. What nonsense. JS is a dynamic language where everything is a hashtable. It will never be really fast: your structs won't be in a single cache line, and you won't be able to calculate field addresses at compile time via pointer offsets. There's no SIMD, no multithreading, no real arrays.

JS is such a simple, dynamic language. It should just stay this way. Please stop bloating it with every feature that's trendy this year. We already have classes that we didn't need. We certainly don't need structs.

High-performance applications that depend on features like shared memory are already being written, but because the language has poor support for them, developers have to use ugly workarounds. This proposal solves that with built-in support.

>It should just stay this way

Counterpoint: JS has been evolving significantly; look at ES6 and ES8 in particular if you need help finding examples.

rty32:
Exactly. Without new features and syntax, people would still be writing MyClass.prototype.method = function () { } like idiots. Such a meaningless argument for preventing progress.
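For anyone who never had to write it, the before/after:

  // Pre-ES6
  function MyClass(name) { this.name = name; }
  MyClass.prototype.method = function () { return this.name; };

  // ES6+
  class MyModernClass {
    constructor(name) { this.name = name; }
    method() { return this.name; }
  }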
Nobody needs classes or prototypes in JS. Objects + functions are more than enough. I stopped using them a few years ago and miss nothing.
jitl:
You and "anybody" are actually two different sets of people with different needs and desires
wruza:
It is enough, but not more than. Grouping functions that operate on a shared context is naturally useful and convenient. Pretending otherwise leads to all sorts of "it's a method, but I see it as a function with an accidental first parameter in a homonymous namespace, because having a function-name prefix is ugly, and it's all ugly, but at least it's not a class".

  import * as fooNs from './foo'
  fooNs.barBazQuuxFoo(foo, …)
vs

  foo.barBazQuux(…)
rty32:
IDE support is often hit or miss with these; I have seen too much of it.

Don't use VS Code, or never bother to write JSDoc / do any strict typing? Never mind. Good luck with your codebase.

I just mean you won't write a video codec or a 3D renderer in JS. It will never get there. Just leave those things to WebAssembly where needed, and leave JS as the slow, dynamic language we use for web apps.
3D renderers have existed in JS for ages so that seems more like a failure of imagination on your part.

The nice thing about fixed-layout structs is that they lean into optimizations people are already doing based on the behavior of JS engines, where 'shapes' of objects are important and properties can be looked up by offset if you keep your code monomorphic. It can be a bit of a headache to enforce this, and you can accidentally fall off a performance cliff if you end up with many 'shapes' for the same thing. By making this a language feature, it codifies and blesses what was essentially a hack relying on the implementation of the underlying engine, which could change at any time.

There are also TypedArrays, which do provide a bunch of cache-friendly (but slightly unergonomic) ways to organize data.

A good resource for the sorts of things people are doing to write high-performance JS is here: https://romgrk.com/posts/optimizing-javascript
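For the record, the cache-friendly typed-array version usually means a structure-of-arrays layout (a sketch with made-up fields):

  // Array-of-objects: one heap object per particle, shapes to track
  //   const particles = [{ x, y, vx, vy }, ...];

  // Structure-of-arrays: dense, typed, no per-particle allocation
  const N = 10_000;
  const x = new Float32Array(N), y = new Float32Array(N);
  const vx = new Float32Array(N), vy = new Float32Array(N);

  function step(dt) {
    for (let i = 0; i < N; i++) {
      x[i] += vx[i] * dt;
      y[i] += vy[i] * dt;
    }
  }

  step(1 / 60);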

It's a false dichotomy. Computers are fast. You should be able to write fast computer programs in any language.

The limiting factor on a program's performance should be the design of algorithms and data structures, not the programmer's choice of language or runtime.

> I just mean you won’t write a video codec or a 3d renderer in JS.

Not with that attitude, you won't.

> Just leave these things to WebAssembly where needed and leave JS as a slow, dynamic language we use for web apps.

That ship sailed when they made V8 and the performance race started.

wruza:
Javascript implementations do not use hash tables for objects.

Yes, it is surprising. https://stackoverflow.com/questions/6586670/how-does-javascr...

And when the JIT kicks in, it does all the usual calculate-the-offset things in generated code.

> Please stop bloating it with every feature that’s trendy this year.

Trendy structs. Did I return to 1980? (wipes happy tear)

They should just go write golang, if this is what they want.
weego:
It's hard to take anyone concerned about 'performance ceilings' in JavaScript object creation seriously at this point.

    Give developers an alternative to classes that favors a higher performance ceiling and static analyzability over flexibility.
is an entirely reasonable goal. Object shape in JS tends to go through a fixed pattern of mutation immediately after construction, and although that can sometimes be analysed away by the JIT, there are a lot of edge cases that make it tricky.

You may not care, but I bet almost everybody who has actually worked on a JS engine does, and has good reasons for doing so.

why?
JS bad. /s