I love the idea of a "thought experiment language" - actually creating a working language is a big overhead, and it's really fun to think about what an ideal language might look like.

The crazy thing with reading this and the comments is that it seems like we've all been daydreaming about completely different versions of a "high level rust" and what that would look like. For me I'd just want a dynamic runtime + simpler types (like "number" or a single string type), but it looks like other people have a completely different list.

Some of the additions here, like a gradual type system, I would really not want in a language. I love gradual type systems for stuff like Python, Typescript and Elixir, but those are cases where there's already so much untyped code written. I would way prefer the guarantees of a fully statically typed codebase from day one when that's an option.

In college, my programming languages class used a language called "Mystery" (I believe created by my professor), which was configurable. Assignments would be like "write some test programs to figure out whether the language is configured to use pass-by-value or pass-by-reference". And there were a bunch of other knobs that could be turned, and in each case, the idea was that we could figure out the knob's setting by writing programs and seeing what they did.

I loved this, both as a teaching aid, and as an eye-opener that programming languages are just an accumulation of choices with different trade-offs that can all go different ways and result in something that works, perhaps a bit better or perhaps worse, or perhaps just a bit more toward or away from one's own personal taste.

This is sort of the lisp idea of "create the language that is natural to write the application in, then write the application". Or Ruby's take on that idea, with more syntax than lisp but flexible and "human" enough to be DSL-ish.

But somewhat to my sadness, as I've progressed in my career, I've realized that the flip side of this is that, if you're building something big it will require lots of people and all those people will have different experiences and personal preferences, so just picking one standard thing and one standard set of defaults and sticking with that is the way to go. It reduces cognitive overhead and debate and widens the pool of people who can contribute to your effort.

But for personal projects, I still love this idea of thought experimentation around the different ways languages and programming environments could work!

Don't be so quick to discount DSLs. Sure, you don't want a half-baked DSL when some simple imperative code would do. But if you watch your API evolve into an algebra and then don't formalize it with a DSL you might be leaving powerful tools for understanding on the table.

A poor-fitting language is terrible for abstract thinking; on the other hand, an internally-consistent and domain-appropriate language can unlock new ways of looking at problems.

I'd highly recommend Martin Fowler's work on DSLs to see how you can apply these techniques to large projects.

Yes, but then you need to be able to market your DSL and get buy-in. Otherwise you will forever be just a team of one. And then need to sell to all the stakeholders of the project the idea of trusting one person for all the development.

So in addition to the skill of creating a DSL, you need the skills of thoroughly documenting it, training other people to use it, creating tools for it, and explaining the benefits in a way that gets them more excited than just using an existing Boring Old Programming Language.

Which is certainly possible. You can get non developers excited if they can use it for answering their own questions or creating their own business rules, for example. But it's a distinct skill set from cranking out code to solve problems. It requires a strong understanding of the UX (or DX) implications of this new language.

I’m of the mindset that API and DSL are more of a continuum than categories. As soon as you write your first abstraction, you’re making a little language.

In the same way, what you listed isn’t a distinct skill set from cranking out code to solve problems. What happens is those skills are now levered. Not the good vibes “leveraged”. I mean in the “impact to success and failure is 100x baseline” sense. If those skills are in the red, you get wiped out.

Every word you wrote is true. It's all still true if you replace "DSL" with any project and "Boring Old Language" with the competitor.

This is the stopping at 90% problem somebody just posted a link to in another thread. edit: https://austinhenley.com/blog/90percent.html

No, for Boring Old Language other people have already solved those problems for you.
Regardless of how supposedly non-controversial each technical design decision is, you will find that the entire scope of them never is.

So you will always be working to bring people along to your design choices and help them understand the (relative) value, or risk forever languishing as the sole contributor.

You don't get buy-in with technology, you get buy-in with ideas.

Given that the GitHub repo is almost three years old, I expect Martin Fowler to already have Dada Patterns, Refactoring in Dada, Dada Distilled, Dada DSL and Dada Best Practices ready to publish.
A related notion is that you need strong, well-thought-out, and, when the system is changing, regularly refactored abstractions. You might not need a DSL but your class/type/trait designs need to be sane, your API needs to be solid, etc ... DDD principles are key here.
Yes, eat your vegetables!

A question of philosophy: If you have all that, don't you already have a DSL, using a deep embedding in the host language?

I certainly think so. Or at least I find it very helpful to think about interface design that way. It's DSLs all the way down.
Yes, but the language in which you create your framework can do a lot of the heavy lifting. For example, if your main interface is a REST API, there is a large body of knowledge of best practices, educational resources, and existing tools for interacting with it.

With a new DSL, you need to create all of that yourself.

The point I (and it seems several others here) are trying to make is that your API already is a DSL, the question is just whether it's a good one or a bad one.

A good one is internally consistent so that users have predictability when writing and reading usage. A good one uses the minimum number of distinct elements required for the problem domain. A good one lets the user focus on what they're trying to do and not how they need to do it.

The principles apply regardless of interface. Physical device, software UI, API, DSL, argument over HN, it's all a continuum.

amen to this … i recommend thinking about your problem in terms of effective data structures and then apply even a very simple DSL to handle access and transformations … fwiw the built in Grammars and Slang support in raku https://docs.raku.org are fantastic tools for this job.
The problem a lot of people have with DSLs is… well, just look at a prime example: SAS.

If you’re an experienced programmer coming in to SAS, your vocabulary for the next LONG time is going to consist primarily of “What The Fuck is this shit?!?”

I hated SAS with a passion when I was forced to work with it for 2 years. One of the biggest problems I faced was that it would take me a long time to find out whether something was doable or almost impossible in that language.

It wanted to be more than just SQL, but the interoperability with other languages was awful; we couldn't even work with it like SQLite.

What do you mean? Computations very naturally organize into batches of 40 cards each.
Was your professor Amer Diwan? His Principles of Programming Languages course was amazing.

This is one of his papers on PL-Detective and Mystery for anyone interested: https://www.researchgate.net/publication/220094473_PL-Detect...

Yep :) I thought there would probably be some folks here who would recognize this.
Nope, but if you click into their paper[0] and follow the link to PL-Detective[1] there, that's the one! (Hat tip to another commenter who was also familiar with this.)

0: https://cs.brown.edu/~sk/Publications/Papers/Published/pkf-t...

1: https://dl.acm.org/doi/10.1145/1086339.1086340

> actually creating a working language is a big overhead

Languages with first-class values, pattern matching, rich types, type inference and even a fancy RTS can often be embedded in Haskell.

For one example, it is very much possible to embed into Haskell a Rust-like language, even with borrow checking (which is type-checking time environment handling, much like linear logic). See [1], [2] and [3].

  [1] http://blog.sigfpe.com/2009/02/beyond-monads.html
  [2] https://www.cs.tufts.edu/comp/150FP/archive/oleg-kiselyov/overlooked-objects.pdf
  [3] http://functorial.com/Embedding-a-Full-Linear-Lambda-Calculus-in-Haskell/linearlam.pdf
Work in [3] can be expressed using results from [1] and [2]; I cited [3] as an example of what a proper type system can do.

These results were available even before the work on Rust began. But, instead of embedding Rust-DSL into Haskell, authors of Rust preferred to implement Rust in OCaml.

They do the same again.

Are you suggesting that creating a new programming language from scratch is a trivial exercise? If yes, wow. If no, I think the intention of your comment could be more clear, particularly regarding the quote you took from the original comment.
I suspect the GP was merely suggesting a less-costly alternative. Perhaps building a complete standalone compiler or interpreter is hard, but we're all designing APIs in our programming language of choice day in and day out.

Both strategies are very hard, but one of them is "build a prototype in a weekend" hard and one of them is "build a prototype in a month" hard.

It is interesting to consider how much the lower abstraction influences the higher abstraction. If you are building on an existing language/runtime/framework then you can inherit more functionality and move faster, but you will also implicitly inherit many of the design decisions and tradeoffs.
totally. for me the interplay between the host language and the target language is the hardest thing to manage when bringing up a new environment. it really doesn't seem like it should be a big deal, but it comes down to the sad reality that we operate by rote a lot of the time, and completely switching semantic modes when going between one world and the other is confusing and imposes a real cost.

I'm still not that good at it, but my best strategy to date is to try to work in a restricted environment of both the host and the target that are nearly the same.

You can have the target language be as far from the host language as you like.

One example, again borrowed from the Haskell universe, is Atom [1]. It is an embedded language for designing control programs for hard real-time systems, something that is as far from Haskell's usual area of application as... I don't know, the Sun and Pluto?

[1] https://hackage.haskell.org/package/atom-1.0.13

I'm sorry - of course you can. my problem is that switching back and forth I internally get them confused, and I start moving the target language to be closer to the host. while this might be my failing alone, I've seen this happen quite a bit in other language design projects.
Right - in theory any Turing-complete language can bootstrap any other Turing-complete language. But in practice some of the conventions and approaches are going to be mixed via some combination of cognitive load, laziness/inertia, ease of development, etc ...
Very good point. See, for instance Kiselyov's embedding of his tagless final form: https://www.okmij.org/ftp/tagless-final/course/lecture.pdf
Creating a "new" programming language isn't that difficult - creating something that is interesting, elegant and/or powerful requires a lot of thought and that is difficult.
For me, creating a new programming language which is suitable for general purpose programming would be extremely hard, regardless of how novel or good it is. But, fair point that "hard" is always subjective.
I think I was being over literal about the "language" part - it's certainly possible to create a very simple language in a short time but actual productivity would require a lot of libraries and tooling and I suspect that is much more work than the core language.
Quite the contrary.

You need to use existing facilities (type checking, pattern matching combinators, etc) of a good implementation language as much as possible before even going to touch yacc or something like that.

> instead of embedding Rust-DSL into Haskell, authors of Rust preferred to implement Rust in OCaml

why? and how much does it matter, if the goal is to have a compiler/interpreter? (as I assume is the case with Dada, and was with Rust)

R&D. Bootstrapping.
Runtime would be nice, but ... that's basically what Tokio and the other async frameworks are. What's needed is better/more runtime(s), better support for eliding stuff based on the runtime, etc.

It seems very hard to pick a good 'number' (JS's is actually a double-precision 64-bit IEEE 754 float, which almost never feels right).

Yes, that's true - "number" is probably more broad than I'd really want. That said, python's "int", "float" and "decimal" options (although decimal isn't really first class in the same way the others are) feel like a nice balance. But again, it's interesting the way even that is probably a bias towards the type of problems I work with vs other people who want more specification.
The key though is probably to have a strong Number interface, where the overhead of it being an object is compiled away, so you can easily switch out different implementations, optimize to a more concrete type at AOT/JIT time, and have clear semantics for conversion when different parts of the system want different concrete numeric types. You can then have any sort of default you want, such as an arbitrary precision library, or decimal or whatever, but easily change the declaration and get all the benefits, without needing to modify downstream code that respects the interface and doesn't rely on a more specific type (which would be enforced by the type system and thus not silent if incompatible).
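
Roughly what that could look like, sketched in Rust terms (the trait and its bounds here are invented for illustration; a real design would also cover conversions and precision):

    use std::ops::Add;

    // A minimal "Number" interface: anything addable, copyable, and
    // constructible from an integer literal. Monomorphization compiles
    // the abstraction away, as with any Rust trait bound.
    trait Number: Add<Output = Self> + Copy + From<i32> {}
    impl Number for f64 {}
    impl Number for i64 {}

    // Downstream code written against the interface; swapping the
    // concrete numeric type doesn't require touching this function.
    fn double<N: Number>(x: N) -> N {
        x + x
    }

    fn main() {
        println!("{}", double(2.5_f64)); // 5
        println!("{}", double(21_i64));  // 42
    }
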
That sort of stuff is easy to do with Truffle (which, ironically, lets you define a language using what they call the "truffle dsl").

The SimpleLanguage tutorial language has a bigint style number scheme with efficient optimization:

https://github.com/graalvm/simplelanguage/blob/master/langua...

"Number" implies at least the reals, which aren't computable so that's right out. Hans Boehm's "Towards an API for the Real Numbers" is interesting and I've been gradually implementing it in Rust, obviously (as I said, they aren't computable) this can't actually address the reals, but it can make a bunch of numbers humans think about far beyond the machine integers, so that's sometimes useful.

Python at least has the big num integers, but its "float" is just Rust's f64, the 64-bit machine integers again but wearing a funny hat, not even a decimal big num, and decimal isn't much better.

I would argue that what "number" implies depends on who you are. To a mathematician it might imply "real" (but then why not complex? etc), but to most of us a number is that thing that you write down with digits - and for the vast majority of practical use cases in modern programming that's a perfectly reasonable definition. So, basically, rational numbers.

The bigger problem is precision. The right thing there, IMO, is to default to infinite (like Python does for ints but not floats), with the ability to constrain as needed. It is also obviously useful to be able to constrain the denominator to something like 10.

The internal representation really shouldn't matter that much in most actual applications. Let game devs and people who write ML code worry about 32-bit ints and 64-bit floats.
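
For a taste of that default, here is a sketch in Rust, assuming the num-rational and num-bigint crates (the closest off-the-shelf approximation; a language-level default would hide all of this):

    use num_bigint::BigInt;
    use num_rational::Ratio;

    fn main() {
        // Exact rational arithmetic with arbitrary-precision integers:
        // 1/3 + 1/6 == 1/2, with no rounding error anywhere.
        let third = Ratio::new(BigInt::from(1), BigInt::from(3));
        let sixth = Ratio::new(BigInt::from(1), BigInt::from(6));
        println!("{}", third + sixth); // prints "1/2"
    }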

> (but then why not complex? etc)

Note parent said "at least the reals"

I don't know why you think it implies reals. Most people would assume BigDecimal
Probably most people want accurate fractions (1/3), so they likely want Rationals. Of course machine-adjacent minds probably immediately want to optimize it to something faster.
I've never needed accurate fractions, except for one case where I should have stored the width and the height of the image as the original figures instead of trying to represent it as a fraction. But that's not a big issue, there's literally no downside to having width and height of the image as integers.

I see no reason why I would need to represent it as an accurate fraction instead of two numbers, even if I divide it later I can always just do that inaccurately since the exact aspect ratio doesn't matter for resizing images (<1% error won't affect the final result)

"God made the integers, and all the rest is the work of man."

- Leopold Kronecker

For what it's worth, Moonbit (https://www.moonbitlang.com/) is a really nice take on this. Designed by the guys who created ReScript for OCaml, but for a WASM-first world.
Wow, that's nice. I didn't know about that one. Thank you for posting!
Have you played at all with Gleam?

https://gleam.run/cheatsheets/gleam-for-rust-users/

This was my first thought too. I've not used it (just clicked through the tutorial) but it has a strong flavor like "Rust, but pleasant".
The gradual type systems you're referring to are bolted onto the languages, or in Elixir's case, the runtime. If you want to see what a language with a deeply integrated gradual type system is like, take a look at Julia. I've found it to be both expressive and precise.
The challenge of thought experiments, in a statically typed language, is ensuring soundness. The first version of Java Generics, for example, was unsound: https://hackernoon.com/java-is-unsound-28c84cb2b3f
I agree, fantasy and play is needed. Since we humans have a brain area for play and imagination, why not explore.
Isn't Rust a "high level rust"?
Their Hello, Dada! example:

print("...").await

I'm coming from Python, and I can't help but ask: If my goal as a programmer is to simply print to the console, why should I care about the await? This already starts with a non zero complexity and some cognitive load, like the `public static void main` from Java.

> If my goal as a programmer is to simply print to the console, why should I care about the await?

Because that isn't ever anyone's actual goal? Optimizing a language design for "Hello World" doesn't seem like a particularly useful decision.

It’s not an end goal, maybe, but if I’m writing a complex program and I want to print to the console for logging or debugging or status, I shouldn’t have to think about the design of that print-call. I would like to be able to focus on the main complexity of the program, rather than worry about boiler-plate complexity every time I want to print.
You seem to be making the assumption that in other languages calling print is a blocking function that guarantees the printing of a string. Which it isn’t.

In python print adds your string to the stdout buffer, which eventually gets written out to the console. But it's not guaranteed; if you want that guarantee you need to call flush on the stdout IO handler.

Dada has taken the approach of making blocking IO operations explicit, rather than purely implicit. The result is that if you want to perform an IO operation, you need to explicitly say when you want to block, rather than allowing an elaborate stack of runtime buffers to dictate what happens immediately, what happens later, and what is going to block further code execution.

In short this complexity exists in other languages like Python, you've simply not been aware of it, or aware of the nuanced ways in which it fails. But if you're someone who's wasted hours wrestling with Python's IO system, then you'll appreciate the explicit nature of Dada's IO system.
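
Rust makes the same split visible, for what it's worth. A minimal sketch:

    use std::io::{self, Write};

    fn main() -> io::Result<()> {
        // This only places the bytes in the stdout buffer...
        print!("working...");
        // ...and this is the explicit "block until the bytes are handed
        // to the OS" step, analogous to calling flush in Python.
        io::stdout().flush()?;
        Ok(())
    }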

Would waiting on a mutex or signaling a semaphore require explicit awaiting in Dada? What about faulting in an mmaped memory buffer?
I honestly don’t understand why you seem to be getting so upset about this. Dada isn’t a real language, it’s a thought experiment. It’s whole purpose to ask exactly these questions, and discuss the consequences, so those learnings can be used to inform other languages.

Arguing that a particular design choice is silly from a purely ergonomic or usage perspective is kind of absurd, given you literally can't use the language at all. Maybe waiting for a mutex, signalling a semaphore, or waiting for page faults should require an await (although it's literally impossible for a language to await a page fault without a lot of cooperation from the OS). The whole point of Dada is you can make those design choices, then work through the consequences. Maybe it turns out they're actually fantastic ideas, once you get past the surface level issues, or maybe they're terrible ideas. But once again, Dada doesn't actually exist! It's a thought experiment to test all kinds of ideas, but without having to waste all the time and energy building those ideas to discover issues that could have been discovered by simply having a conversation.

Sorry for appearing that way, I'm genuinely not getting upset. I'm just passionate about this relatively minor corner of language design.

Exactly because Dada is just a thought experiment, it's interesting to push the boundaries of such a model in various ways with low stakes.

Constructively, I'm partial to full coroutine abstractions that hide the asynchrony of functions, or, on the other side of the spectrum, to full effect systems.

I think async is a necessary evil in some high performance languages (like rust, C++, certainly not python), but elevating it to something actually desirable from the ergonomic point of view seems just wrong.

I think there is value in elevating it from an ergonomic point of view, especially when walking that boundary between concurrency and parallelism.

Pure concurrency has the advantage that you can do away with a lot of complex and nuanced synchronisation mechanisms, by virtue of the fact you're not actually sharing memory between parallel lines of computation. Something that makes writing correct concurrent code quite a bit easier and friendlier. In that world, having clear and explicit markers of when a function call might result in you yielding to the event loop, and thus memory values you read previously in your function might change, is very handy. Especially if there's a nice mechanism to delay that yield until after you've completed all your important memory operations, and have confidence that your computed values are consistent.

Coroutines are certainly a different approach to the same problem; they hide the blocking nature of functions in a neat way, but at the cost of requiring you to start using those complex synchronisation primitives, because any function call or operation might result in an implicit yield, and thus you can't predict when memory values might change.

My first introduction to async/await was the Twisted framework for Python. It wasn't called async/await back then, but the principles were identical. Twisted made it possible to write pretty high performance concurrent network code in python, in a way that was very understandable, and _safe_, without resorting to multi-threading or multi-processing. As a result I think the async/await in Python is actually a really good idea. When used correctly, it makes it possible to write really nice, performant, code in python, without resorting to parallelism, and all the pitfalls that come with that (i.e. synchronisation). Async/await provides a nice middle ground between full on parallelism, and single threaded blocking code with no ability to interleave IO operations.

> concurrency has the advantage that you can do away with a lot of complex and nuanced synchronism mechanisms, by virtue of the fact you’re not actually sharing memory between parallel lines of computation.

That's the bit I reject. In practice you are saying that there are no reentrancy concerns between preemption points (async calls), and that marking those points explicitly in code helps avoid bugs.

I claim that:

a) there can be reentrancy issues even in regions only performing sync calls (due to invoking callbacks or recursively reentering the event loop)

b) if we value explicit markers for reentrancy, we should be instead explicitly marking reentrancy-unsafe regions with atomic blocks instead of relying on the implicit indirect protection of within-async regions.

With async you still have to think about synchronization, but instead of being explicit and self-documenting in code with synchronized objects and critical sections, you have to rely on the implicit synchronization properties. And if you write code that relies on them, how do you protect against that code breaking when someone (including ourselves) shuffles some async calls around in three months?

In fact something like rust, thanks to lifetimes and mutable-xor-shared, has much better tools to prevent accidental mutation.

Don't get me started on python, asyncio is terrible in its own unique ways (the structured concurrency in trio and similar seems much saner, but I have yet to use it).

[Sorry for continuing this argument, as you can tell I have quite strong opinions on it; feel free to ignore me if you are not interested]

> there can be reentrancy issues even in regions only performing sync calls (due to invoking callbacks or recursively reentering the event loop)

You would hope if this was done properly you wouldn’t be using callbacks at all, because that’s kinda throwing away any benefits async/await provides, and reentrancy to the event loop should require explicit markings.

> if we value explicit markers for reentrancy, we should be instead explicitly marking reentrancy-unsafe regions with atomic blocks instead of relying on the implicit indirect protection of within-async regions.

In principle yeah, I kinda agree, but a lot of code isn't reentrancy-safe; I would argue that most code isn't, unless carefully designed to be. So a programming model that implicitly makes most code protected against reentrancy does provide value, and makes it harder for difficult-to-debug concurrency bugs to slip in.

I don’t necessarily think it’s the “best” approach, I much prefer people actually think about their code carefully, and be explicit with their intentions. But that requires quite a lot a of experience, and understanding the detailed nuances that come with parallelism, something many engineers simply don’t have. So I think there’s a lot of value in programming paradigms that provide additional protection against those types of errors, but without forcing the type of rigour that Rust does, due to the learning barrier it creates.

I suspect that async/await is here to stay for now, but I very much see it as part of a continuum of concurrency paradigms that we'll eventually move past, once we find better ways of writing safe concurrent code. But I suspect we'll only really discover those better ways once we've fully explored what async/await offers, and completely understand the tradeoffs it forces.

It's trivial to tell Python to block on all stdout writes, though. You don't have to do it on every call.
sure, but it seems useful to be able to opt-in/opt-out of async easily. ie.

if I want a short/simple program it would be cool to put a stanza on the top of the file to auto-await all futures.

I'm not a big fan of async/await in general, I just didn't think this specific complaint was particularly compelling.
I'd say you want something like 'debug_msg()' for this.

'print()' should be async because it does IO. In the real world most likely you'd see the output once you yield.

“The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.” — Brian Kernighan
In my experience the best debugging tool in Python is judiciously placed asserts, breakpoints and PDB. Not print.
Huh, typically print is the debug message function vs explicitly writing to stdout
I don’t think so.

Normally print isn’t a debug message function, people just use it like that. (it normally works on non debug builds)

Production builds should retain all debug prints, only hide them behind a flag. This helps you preserve sanity when troubleshooting something.
Printing directly to the console, even in a console app, is for debug purposes only.

If your console app is writing output to any device, it must, for instance, handle errors gracefully.

That means, at least in Rust, write! rather than print!.

What makes you say that? I almost always use println! over write!.

From the docs: "Use println! only for the primary output of your program. Use eprintln! instead to print error and progress messages."

What makes me say that a well-built program properly handles errors?
Panic is a perfectly proper way for a well-built program to stop execution.

There is no point in juggling around Result types if a failure means that you cannot recover/continue execution. That is in fact exactly what panic! is intended for [1].

[1]: https://doc.rust-lang.org/book/ch09-03-to-panic-or-not-to-pa...

A failure to write to stdout should not be unexpected given that stdout is routinely redirected to files or to pipes, both of which can be suddenly closed or otherwise fail from the other direction. Yes, you can't recover in this case, but you should at least properly report the error to stderr before exiting, in a way that lets the end user (rather than app dev) properly diagnose the error.

Now if you fail to write to stderr, yeah, that's a good reason for a console app to panic. The onus is on the user to provide something that is "good enough" in that case.

IMO the real problem is that print() etc. default to stdout historically, but are used mostly for diagnostic information rather than actual output in practice, so they should really go to stderr instead. This would also take care of various issues with buffering etc.
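
In Rust terms, that policy might look roughly like this (a sketch, not an established idiom):

    use std::io::{self, Write};
    use std::process::exit;

    fn main() {
        // Primary output goes to stdout; a write failure (closed pipe,
        // full disk, read-only redirect target) is not a programmer bug...
        if let Err(e) = writeln!(io::stdout(), "primary program output") {
            // ...so report it on stderr, where the end user can diagnose
            // it. If even stderr fails, panicking is all that's left.
            eprintln!("error: failed to write to stdout: {e}");
            exit(1);
        }
    }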

Woah woah woah let's not get hasty. We can have panicking and nonpanicking versions of the API (at least until somebody builds the nonpanicking version of Rust, that will be great). The panicking version is for quick, off-the-cuff usage, and the nonpanicking one for production use.

There's value in the Hello, World and println-debugging style print, even if it should be eschewed in most general contexts.

I didn't say that there isn't value in "hello world" or println-debugging style print. The point is that both should go to stderr rather than stdout (and then panic if they fail). But for stdout, which is the channel for output and not for logging, the default should be to require error handling.

Consider something as trivial as `cat foo >readonly_file` to see why.

Panic is perfectly fine in certain cases, but it's absolutely not a general error-handling mechanism for Good Programs (TM). (Some contexts excluded, horses for courses and all that)

You can and should recover from bog standard IO failures in production code, and in any case you'd better not be panicking in library code without making it really clear that it's justified in the docs.

If your app crashes in flames on predictable issues it's not a good sign that it handles the unpredictable ones very well.

In Rust, the common debug message function would be log::info! or log::debug!, with two lines of setup to make logs print to stderr. Or for something more ad hoc, there's dbg! (which adds context to what you are printing, and doesn't care about your logging config). Not that people don't use print for the purpose, but it's basically never the best choice. I assume Dada is conceived with the same mindset.
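
Concretely, assuming the log facade with env_logger as the backend (other backends wire up similarly):

    fn main() {
        // Setup: pick a logger backend and initialize it.
        // env_logger writes to stderr by default.
        env_logger::init();

        log::info!("starting up");
        log::debug!("only shown when RUST_LOG=debug is set");

        // dbg! prints file:line, the expression, and its value to
        // stderr, and returns the value, so it can wrap expressions.
        let x = dbg!(2 + 2);
        assert_eq!(x, 4);
    }
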
I disagree—println! is far more common for everyday printf debugging than the log crate is. Do I have any evidence of this? No, but it takes less to type and is harder to mess up with log levels while working just as effectively.
Print is a debug function in some languages, but it's usually just stdout. You can add all kinds of logging libraries that wrap around print() with prefixes and control sequences to add colours, but I generally don't bother with those myself. In those circumstances, I would use something like logger.debug() instead of plain print(), though.

I personally find myself using print debugging as a last resort when the debugger doesn't suffice.

There seems to be a conflation of two subtly different types of debugging here—one is simply regurgitating state while tracking down a specific bug on a local machine and should not be committed, and the other is intended to be checked into possibly production code with the assumption that it's being sent to a log aggregator. I think both are valid techniques, but one clearly benefits from the use of the `log` crate more than the other.
> 'print()' should be async because it does IO

What if I want to do synchronous IO?

Just from glancing over the docs, that doesn't seem supported:

> Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.

Good riddance, IMO — never been a fan of blocking IO. Dada does have threads though, so I wonder how that works out. (Forcing async/await makes a lot more sense in JavaScript because it's single-threaded.)

Node lets you cheat with "synchronous" APIs: it stops the whole event loop. If you start making exceptions for "little" bits of IO like printing to the console or reading files, the async parts of your code are "async when someone hasn't done something synchronous". Doing a `readFileSync` even on a small file in a hot code path means you're meaningfully tanking the performance of your code.

What you're asking for is "stop running my code until I've finished printing to the console". That's what the `.await` does. Synchronous IO on `print()` would mean _everything in the whole application that logs_ suddenly blocks the whole application from doing _anything_ while the console is being written to, not just the currently running code.

If you want synchronous stop-the-world IO (like Python, where async/await is bolted on), you shouldn't choose a language based around async/await concurrency.

Then use .await?
gevent handles async fine without the explicit async/await

.NET core will introduce something similar

The cost of it is that when you need to make something explicitly async, you have to wrap it into a greenlet in a much more involved way.

JavaScript lets you do it much more ergonomically.

Doesn't seem very involved; for example (cilk inspired syntax):

    let f = spawn { print() }  // fork
    ...
    wait f // join. f is a linear type
You only pay the complexity cost if you need it.
I'm coming from several languages (C, Perl, Java, JavaScript, Ruby, Python) and I strongly dislike the async/await thing.

At least let people change the default. For example

  await {
    // all the code here
    // runs synchronously 
    async {
      // except this part where
      // async methods will return early
      print("but not me!").await()
    }
  }
However the remark I make to people advocating for statically typed Ruby holds for this language too: there are already languages like that (in this case, await by default); we can use them and let Dada do its own thing.
Also it immediately makes me wonder what `await` is... Is it a reference to a field of whatever the `print()` method is returning? Is it calling a method? If it's a method call without parentheses, how do I get a reference to a method without calling it?

(These kinds of questions are just unavoidable though; everyone will have these little pet things that they subjectively prefer or dislike.)

They borrowed it from Rust: `.await` is special syntax, roughly equivalent to `await print(...)` in other languages.

https://rust-lang.github.io/async-book/01_getting_started/04...

I wonder why not do `await print()` though? It reads more naturally as "wait for this" and is more clearly not a property access.
Some more details on `await` as a postfix operator rather than prefix

https://blog.ceejbot.com/posts/postfix-await/

Postfix operators are much more readable when chaining or composing. I used to write a lot of async C#, and it quickly gets tiresome to constantly have to write stuff like (await (await (...) ...), and reading such code requires jumping back and forth to unravel.
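
A small Rust sketch of the difference (fetch and text are made-up async helpers, just enough to make the chain compile):

    struct Response;

    impl Response {
        async fn text(self) -> String {
            String::from("body")
        }
    }

    async fn fetch(_url: &str) -> Response {
        Response
    }

    async fn run() -> String {
        // Postfix .await keeps the chain in reading order:
        fetch("https://example.com").await.text().await
        // The prefix equivalent from C#-style languages nests instead:
        //     (await (await fetch("https://example.com")).text())
    }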

Amusingly, this is history repeating itself. These days we consider the X.Y syntax for object members quite natural, but historically if you look at the earliest examples, it was actually prefix. The first ALGOL-60 dialects to add records used functional notation, so you had to do Y(X). In ALGOL-68, they made it an operator instead (which allowed for proper namespacing), but it was still prefix: Y OF X; very straightforward and natural. But then people pretty quickly found out that (Y OF (X OF (...)) does not make for readable code in practice.

What I think they did wrong was require a period there - that is the part that makes it look like a property access. It would have been better as `print() await`, making it clear that it is just a postfix operator.

Yeah but for a new language I haven't seen before, I immediately wonder!
The reasoning with await is valid, it's an I/O call, but the await should maybe be hidden inside the print then?
It MIGHT or might NOT be valid, it depends. In a lot of cases, I might just want to print, and not yield "right here," but later (if at all in the current method). Further, writing to i/o is usually non-blocking (assuming the buffers are big enough for whatever you are writing), so in this case the await literally makes no sense.
The canonical reason a language adds a utility like print over just offering the services of the underlying console is to make the Hello, World example as terse as possible.

IO is inherently extremely complicated, but we always want people to be able to do their simplified form without thinking about it.

A smart print() implementation may check if there's enough output buffer, and, if so, quickly return a Future which has already completed. A smart scheduler can notice that and not switch to another green thread.
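
A rough sketch of that fast path in Rust (try_buffer is invented for illustration; real stdout locking and scheduler integration are more involved):

    use std::future::{self, Future};

    // Pretend this copies the string into an output buffer, returning
    // true if there was room and the copy happened without blocking.
    fn try_buffer(_s: &str) -> bool {
        true
    }

    // A "smart print": on the fast path the string is already buffered
    // by the time this returns, and the returned future is already
    // completed, so an executor that polls it never has to suspend.
    fn smart_print(s: &str) -> impl Future<Output = ()> {
        if !try_buffer(s) {
            // Slow path elided: a real version would return a future
            // that waits for buffer space and retries.
        }
        future::ready(())
    }
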
One can argue that in the VAST majority of instances, you'll never ever be printing so much that you'll fill the buffer. If you need that kind of control, just get a direct stream to stdout, otherwise make print() block if it needs to.
You might not fill the buffer. But your program might crash before the buffer is flushed. In that case having prints explicitly block until the IO is completed is very valuable, especially when debugging. Nobody wants to waste time debugging their debug code.
Then just write to stderr, which is unbuffered (usually).
It's a leaky abstraction (https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...), but maybe there is no helping it and for some reason it is a necessary tradeoff for performance?
Maybe it’s actually a non-leaky abstraction because it makes the async-nature explicit. The alternative is hiding it, but it’s still going to affect your code, making that effectively a leaky abstraction.
Why is I/O so special that it needs to be explicitly marked across the call stack? What about memory allocation, which can arbitrarily delay a process? Should allocating functions be transitively annotated? What about functions that lock a mutex or wait on some synchronization primitive? What about those that signal a synchronization primitive? What about floating point operations, which can raise exceptions? What about panicking functions?

Either all side effects should be marked or none should. Retconning await annotations as a useful feature instead of a necessary evil is baffling.

I/O operations tend to be the slowest your software can perform, and also the riskiest, because you're dependent on so many different underlying components working correctly: everything from the kernel syscall, to the device driver, to the device itself, and potentially devices attached to devices that are attached to your computer. In short, IO operations are a complete shit show of possible problems, which can all occur while your software is suspended in a syscall.

Memory allocations by comparison are extremely quick, and generally very reliable. Your system's memory subsystem isn't a smorgasbord of different memory drivers and controllers. It's one memory system, talking to one memory controller, via an API that's been standardised for decades, and where every implementation of that API is basically tested to the extreme every time a computer turns on. That's assuming your language even bothers asking the OS for memory on every allocation, which it probably doesn't. Most language runtimes request large blocks of memory from the OS, then allocate out of those blocks on demand. So most "allocating functions" never result in a syscall at all.

Memory allocation can literally fail for reasons completely outside the control of the application (for example because the OS enforces a maximum virtual memory size on the application).

The fact that most allocations are fulfilled via internal pools is immaterial; at some point the allocator needs to ask the OS for more memory. This parallels the way that most I/O doesn't actually perform syscalls, because of buffering.

Also allocations might end up performing arbitrary I/O indirectly if the OS needs to flush dirty pages to disk to free up memory.

Maybe there could be something like an aprint() wrapper, if the authors wanted to make the async nature explicit? Or something else; probably not this for one of the most common things a programmer must do.
Actually, surely you'd want an async print and a synchronous print with clear labels? aprint might be interpreted as an async print, not an awaited print, which is what I meant. Maybe this goes against "everything is async". Anyhow, a better name could be print_awaited, so a "print_awaited" could be used directly without the extra syntax to await it (saving some autocomplete time?), though it's still long.
Why is aprint “non leaky” but print.await “leaky”?
Hey Steve, I wouldn't say that "print.await" is a leaky abstraction. I think "print.await" is explicit and that's good; it communicates its abstraction fairly clearly, presumably following a pattern used commonly in this imagined language.

I suppose that a wrapper like "aprint" (a convenience function labelled async, with an "a" prefix) would be a bit better than having people continually try using print, not awaiting it, and not getting the expected output in stdout (or whatever stream it's sent to) while they're in the middle of trying to test something or otherwise get something working. I'm of the opinion that common things should be easy. Maybe "people would generally expect a print function to just work and not return a promise or something" is an abstraction? "aprint" might actually be the wrong name; I'm not sure I've really thought about it right.

I agree with you personally on print.await; maybe I replied to the wrong person on this thread, ha!
To be precise: the contract depends on the implementation. Here’s an example:

I write an in memory kv cache. It’s in memory so no async needed. Now I create a trait and implement a second version with file backing. Now the children are crying because async needs to be retroactively added and also why, makes no sense etc.

It does make sense if you want other types of resources, like time and memory, to also be part of the contract. Async annotations let you do this but hiding the asynchrony does not.
Make sense might be an overstatement but ok. Then why do functions with sync syscalls (ie file, timers or mutex ops) not expose the same contractual differences? They’re just regular functions in most languages including Rust.

Perhaps anything involving syscalls should be exposed and contractual. I doubt it, but maybe it’s important for some obscure ownership-of-resources reason. But then why the inconsistency between traditional and pooled syscalls? The only difference is whether the runtime sits in the kernel or in user space. The only one who should care is the runtime folks.

My take has been for years that this is throwing complexity over the fence and shaming users for not getting it. And even when they do get it, they Arc<Mutex> everything anyways in which case you are throwing the baby out with the bathwater (RAII, single ownership, static borrowing).

> Then why do functions with sync syscalls (ie file, timers or mutex ops) not expose the same contractual differences? They’re just regular functions in most languages including Rust.

Because the kernel doesn't expose that contract, so they don't have that behaviour.

> The only difference is whether the runtime sits in the kernel or in user space.

In other words, what contracts you have control over and are allowed to provide.

> My take has been for years that this is throwing complexity over the fence and shaming users for not getting it.

I'm sure how we got here would seem baffling if you're going to just ignore the history of the C10K problem that led us to this point.

You can of course paper over any platform-specific quirks and provide a uniform interface if you like, at the cost of some runtime overhead, but eliminating as much of this kind of implicit runtime overhead as possible seems like one of Rust's goals. Other languages, like Go, have a different set of goals and so can provide that uniform interface.

It's probably also possible to have some of that uniform interface via a crate, if some were so inclined, but that doesn't mean it should be in the core which has a broader goal.

> I'm sure how we got here would seem baffling if you're going to just ignore the history of the C10K problem that led us to this point.

I am not unaware of pooled syscalls. I worked on the internals of an async Rust runtime, although that should not matter for critiquing language features.

The archeological dig into why things are the way they are can come up with a perfectly reasonable story, yet at the same time lead to a suboptimal state for a given goal - which is where the opinion space lies - the space where I'm expressing my own.

> but eliminating as much of this kind of implicit runtime overhead as possible seems like one of Rust's goals

Yes, certainly. And this is where the perplexity manifests from my pov. Async is a higher level feature, with important contractual ecosystem-wide implications. My thesis is that async in Rust is not a good solution to the higher level problems, because it interacts poorly with other core features of Rust, and because it modularizes poorly. Once you take the event loop(s) and lift it up into a runtime, the entire point (afaik - I don't see any other?) is to abstract away tedious lower level event and buffer maintenance. If you just want performance and total control, it's already right there with the much simpler event loop primitives.

In short, I fail to see how arguments for async can stand on performance merits alone. Some people disagree about the ergonomics issues, which I am always happy to argue in good faith.

>>[mutexes] are just regular functions in most languages including Rust.

>Because the kernel doesn't expose that contract, so they don't have that behaviour

Which OS are we talking about? Linux doesn't really have mutices as primitives. You can build async mutexes on top of eventfd and soon even on top of futexes with io_uring.

I’m on mobile so it’s tough to look up the details, but IIRC at least Windows?

Implementations of the standard library mutexes are here https://github.com/rust-lang/rust/tree/master/library/std/sr...

And of course it’s pthreads on Linux.

Windows has a lot of mutex and signaling primitives. Some of them can be asynchronously waited on. It also has an IoRing and I wouldn't be surprised if keyed events (the local futex equivalent) will be supported in the future.

As an aside, it is interesting that rust uses pthread_mutex for its standard library mutex. GCC/libstdc++ regrets that decision as its std::mutex is now way larger than it needs to be but it is now permanently baked in the ABI. I guess rust still doesn't guarantee ABI stability so the decision could be reversed in the future.

Just going to be honest here:

“Zero complexity print to the screen”

Is, quite possibly, the dumbest argument people make in favour of one language over another.

For experienced people, a cursory glance at the definitions should be enough. For new programmers, ignoring that part "for now" is perfectly fine. So too is "most programming languages, even low level ones, have a runtime that you need to provide an entry point to your program. In Java, that is public static void main. We will go over the individual aspects of this later." This really is not that difficult, even for beginners.

Personally, I find more “cognitive load” in there not being an explicit entry point. I find learning things difficult when you’re just telling me extremely high level *isms.

This does surface the fact that it's another await/async red/green function language though.

If they're already making it gradually typed and not low-level, I don't understand why they don't throw away the C ABI-ness of it and make it more like Ruby with fibers/coroutines that don't need async/await.

I'd like parametric polymorphism and dynamic dispatch and more reflection as well if we're going to be making a non-low-level rust that doesn't have to be as fast as humanly possible.

(And honestly I'd probably like to keep it statically typed with those escape hatches given first-class citizen status instead of the bolted on hacks they often wind up being)

[Ed: also rather than go back to object oriented, I'd rather see really easy composition, delegation and dependency injection without boilerplate code and with strongly typed interfaces]

Surely `public static void main` has less cognitive load than

    if __name__ == "__main__":
        main()
You don't have to do this, though. You can have an entrypoint.py that simply calls `main()` without that if. You don't even need to have modules if you want to, so you can write your functions and call them right after.
So in python, you need to understand not 1, but at least 3 different versions of “an entry point”, and to you, this is “less cognitive load”?

I had the same issue with Swift. There’s 30 ways to write the exact same line of code, all created by various levels of syntax sugar. Very annoying to read, and even more annoying because engaging different levels of sugar can engage different rulesets.

You don't "need" any of the entry points when you are beginning Python.

print("Hello World") is a perfectly valid and runnable Python code.

And when you are working on a small part of a large code base, you usually don't care about __main__ either. So yes, it's complexity but it's complexity that you don't need to encounter right away.

Python is intuitive off the bat. public static void main(String[] args) is not.

FWIW, in an upcoming version of Java you'll likely be able to do this:

    $ cat Hello.java
    void main() { System.out.println("Hello, world!"); }
    $ java --enable-preview --source 21 Hello.java 2>/dev/null
    Hello, world!
    $
This is currently a preview feature in Java 21 and 22.
Interestingly, C# (which began its life as a sort of Java/Delphi crossover syntactically) agrees. It used to be that you had to write:

   class Program {
      static void Main() {
         Console.WriteLine("...");
      }
   }
But these days, we can just do:

   Console.WriteLine("...");
Python has no concept of an “entry point”. You just run the program from start to end. What’s hard to understand about that?
It should have been implemented as `def __main__(): ...`
In Python if you carelessly print within a multiprocess part of an application you may end up getting a nonreproducible mess on stdout with multiple streams merged at random points. So the cognitive load in this example is that this new language is meant for multithreaded coding and can make multithreading easy compared to other languages.
That's a great example of the "simplicity" of Python being anything but.
This is not at all unique to Python, and a footgun present in any language that allows multiple threads.

But if you're spawning multiple threads - in Python or any other language - you're already past any semblance of "simplicity", threads or no threads.

That's a good point regarding print; however, several other languages make multithreading easy. F#'s async is easy and just works, as do Erlang and Elixir of course. Python's asyncio is barely even an async library, much less one that is simple.
F# async will not prevent you from causing data races.

Erlang does by basically not having shared mutable data.

> F# async will not prevent you from causing data races.

That's true regarding side-effects and mutable data, but I wasn't saying that. It's still a much more sane and actually concurrent and asynchronous library than Python's asyncio, which is not actually concurrent, single threaded, and very difficult to work with. For example, there's no way to launch an asynchronous process and then later await it from synchronous code, whereas it's easy in F#.

It is a fundamental question in your language design. Some languages make side-effects explicit one way or the other. Other languages handle side-effects in a more implicit fashion. There's a tradeoff to be made here.
The other problem I see here is that starting and awaiting the task are too coupled.

In JavaScript calling the function would start the task, and awaiting the result would wait for it. This lets you do several things concurrently.

How would you do this in Dada:

    const doThings = async () => {
      const [one, two, three] = await Promise.all([
        doThingOne(),
        doThingTwo(),
        doThingThree(),
      ]);
    };
And if you wanted to return a thunk to delay starting the work, you would just do that yourself.
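
If Dada's thunks compose the way Rust futures do, presumably a join combinator covers this. A sketch with the futures crate and made-up do_thing_* functions:

    use futures::join;

    async fn do_thing_one() -> u32 { 1 }
    async fn do_thing_two() -> u32 { 2 }
    async fn do_thing_three() -> u32 { 3 }

    async fn do_things() {
        // Polls all three concurrently, like Promise.all; the
        // difference is that nothing runs until the combined
        // future is itself awaited.
        let (one, two, three) =
            join!(do_thing_one(), do_thing_two(), do_thing_three());
        assert_eq!((one, two, three), (1, 2, 3));
    }
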
I assumed dada is using promises under the hood, just as JS is. If this is the case it could provide a static method for waiting on multiple promises, just as JS does.
This seems to say otherwise (specifically the "but not running" part):

Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.

From https://dada-lang.org/docs/dyn_tutorial

Yeah I'm not so sold, but mainly, I don't understand the logic here

If I'm declaring an async function, why do I need to await inside it?

like, if the return of an async function is a promise (called a thunk), why can't I do

    async async_foo() { return other_async_foo(); }

and it will just pass the promise?

Then you await on the final async promise. Makes sense?

It's weird, I want pretty much the exact opposite of this: a language with the expressive type system and syntax of Rust, but with a garbage collector and a runtime, at the cost of performance. Basically Go, but with Rust's type system.

I'm aware that there are a few languages that come close to this (crystal iirc), but in the end it's adoption and the ecosystem that keeps me from using them.

If you do not want to mess with the Rust borrow checker, you do not really need a garbage collector: you can rely on Rust reference counting. Use 1) Rust reference-counted smart pointers[1] for shareable immutable references and 2) Rust interior mutability[2] for non-shareable mutable references checked at runtime instead of compile time. Effectively, you will be writing a kind of verbose Golang with Rust's expressiveness.

[1] https://doc.rust-lang.org/book/ch15-04-rc.html

[2] https://doc.rust-lang.org/book/ch15-05-interior-mutability.h...
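
To make that concrete, here is a minimal sketch of the pattern (my own illustration, not from the linked chapters; the `Config` type is made up): `Rc` gives shared ownership, `RefCell` moves the mutability check to runtime.

    use std::cell::RefCell;
    use std::rc::Rc;

    #[derive(Debug)]
    struct Config {
        retries: u32,
    }

    fn main() {
        // Shared, reference-counted handle; cloning just bumps the count.
        let shared = Rc::new(RefCell::new(Config { retries: 3 }));
        let also_shared = Rc::clone(&shared);

        // Mutation through any handle; conflicting borrows panic at
        // runtime instead of failing at compile time.
        also_shared.borrow_mut().retries = 5;

        println!("{:?}", shared.borrow()); // Config { retries: 5 }
    }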

A language has a paved road, and when you go off of that road you are met with extreme annoyance and friction every step of the way.

You’re telling people to just ignore the paved road of Rust, which is bad advice.

No, not really. Firstly, there is no significant "friction" to using Rust smart pointers and interior mutability primitives, as those constructs have been added to Rust for a reason: to solve certain borrow checker edge cases (e.g., multiply interconnected data structures), so they are treated by the Rust ecosystem as first-class citizens. Secondly, those constructs make a pretty good educational tool. By the time people get to know Rust well enough to use those constructs, they will inevitably realize that mastering the Rust borrow checker is just one book chapter away, one to go through out of passion or boredom.
I find quite a lot of friction in being expected to understand all of the methods: what they do, when you'd use them, why you'd choose one over another that does a slightly different thing but maybe still fits.

The method documentation alone in reference counting is more pages than some entire programming languages. That’s beside the necessary knowledge for using it.

I don't think it's necessary to understand every single `Rc<T>` method[1] in order to use Rust smart pointers while learning Rust. Perhaps try a different learning resource, such as "Rust By Example"[2], instead?

[1] https://doc.rust-lang.org/std/rc/struct.Rc.html

[2] https://doc.rust-lang.org/rust-by-example/std/rc.html

Reference counting and locks are often the easy path in Rust. It may not feel like it because of the syntax overhead, but I firmly believe it should be one of the first solutions on the list, not a last resort. People get way too fixated on trying to prove to the borrow checker that something or other is OK, because they feel like they need to make things fast, but it's rare that the overhead is actually relevant.
If it's syntactically messy, though, it's not really the easy path. Ergonomics matter just as much as semantics.

I do think that a superset of Rust that provided first-class native syntax for ARC would be much more popular.

Yes! Thank you! Dunno what it is about Rust that makes everyone forget what premature optimization is the root of all of.
The zero-cost abstraction is always so tantalizingly within reach!

I tell everybody to .clone() and (a)rc away and optimize later. But I often struggle to do that myself ;)

I strongly disagree that smart pointers are "off the paved road". I don't even care to make specific arguments against that notion, it's just a terrible take.
It's telling people to avoid the famously hard meme-road.

Mutexes and reference counting work fine, and are sometimes dramatically simpler than getting absolutely-minimal locks like people seem to always want to do with Rust.

This is what Swift does, and it has even lower performance than tracing GC.

(To be clear, using RC for everything is fine for prototype-level or purely exploratory code, but if you care about performance you'll absolutely want to have good support for non-refcounted objects, as in Rust.)

An interesting point, but I would have to see some very serious performance benchmarks focused specifically on, say, RC Rust vs. GC Golang in order to entertain the notion that an RC PL might be slower than a GC PL. Swift isn't, AFAIK, a good yardstick of... anything in particular, really ;) J/K. Overall PL performance depends not only on memory management, but also on the quality of the standard library, the larger ecosystem, etc.
Can you help me understand when to use Rc<T> instead of Arc<T> (atomic reference counter)?

Edit: Googled it. Found an answer:

> The only distinction between Arc and Rc is that the former is very slightly more expensive, but the latter is not thread-safe.

The distinction between `Rc<T>` and `Arc<T>` exists in the Rust world only to allow the Rust compiler to actually REFUSE to even COMPILE a program that uses a non-thread-safe primitive, such as the non-atomic (thus susceptible to thread race conditions) reference-counted smart pointer `Rc<T>`, with a thread-bound API such as `thread::spawn()`. (Think 1-AM copy-and-paste from a single-threaded codebase into a multi-threaded codebase that crashes or leaks memory 3 days later.) Otherwise, `Rc<T>`[1] and `Arc<T>`[2] achieve the same goal. As a general rule, many Rust interfaces exist solely for the purpose of eliminating the possibility of particular mistakes; for example, `Mutex<T>` `lock()`[3] is an interesting one.

[1] https://doc.rust-lang.org/rust-by-example/std/rc.html

[2] https://doc.rust-lang.org/rust-by-example/std/arc.html

[3] https://doc.rust-lang.org/std/sync/struct.Mutex.html
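
As a small sketch of that refusal in practice (my own example, not from the linked pages), the compiler rejects moving an `Rc` into another thread but accepts an `Arc`:

    use std::rc::Rc;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let rc = Rc::new(42);
        // Does not compile if uncommented: `Rc<i32>` is not `Send`, so the
        // compiler refuses to move it into another thread.
        // thread::spawn(move || println!("{rc}"));
        drop(rc);

        // `Arc` uses an atomic count, so it is `Send` and this compiles.
        let arc = Arc::new(42);
        let handle = thread::spawn(move || println!("{arc}"));
        handle.join().unwrap();
    }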

An Arc is an Rc that uses an atomic integer for its ref count. This ensures updates to the count are safe between threads, so the Arc can be shared between threads. In practice the two can compile to very similar assembly, at least on amd64, because most loads and stores there already have memory ordering guarantees, but on other architectures the atomics can in fact be a tad slower. Marking the operations as atomic also prevents the compiler from doing instruction reordering that might cause problems.
You might enjoy F#. It's a lot like OCaml (which others have mentioned) but being part of the .NET ecosystem there are libraries available for pretty much anything you might want to do.
Yes, F# is an often forgotten gem in this new, brighter cross-platform .NET world. :)
:-) Is F# a contender outside the .NET world?
What do you mean by "outside the .NET world"? F# is a .NET language (more specifically, a CLR language). That question seems like asking "are Erlang and Elixir contenders outside of the BEAM world?" or "is Clojure a contender outside of the JVM world?".

F# being on top of the CLR and .NET is a benefit. It is very easy to install .NET, and it comes with a huge amount of functionality.

If you're asking if the language F# could be ported to another VM, then I'd say yes, but I don't see the point unless that VM offered similar and additional functionality.

You can use F# as if C# didn't exist, if that's what you mean, by treating .NET and the CLR as an implementation detail, which they effectively are.

This conversation could be referring to https://fable.io/

Other than that, the question is indeed strange and I agree with your statements.

You are generally right, but Clojure is a bad example, it is quite deliberately a “hosted” language, that can and does have many implementations for different platforms, e.g. ClojureScript.
Yea, that's true. I forgot about that. I did think of Clojure CLR, but I don't get the impression that it is an all that natural or widely used implementation, so I ignored it. ClojureScript is obviously much more used, although it is still a "different" language.

https://github.com/clojure/clojure-clr

There aren't many languages that can do server-side and browser-side well. F# is one of them!
Non .NET server-side?
You can do Node.js with F#

But these days .NET is a great server-side option. One of the fastest around, with a bit of tuning.

Fable compiles F# to Python, Rust, and Dart now, too, in addition to JS. I haven't tried Dart or Rust, but when I tried compiling its output to Python it was actually quite good!
You have awoken the ocaml gang
That is probably the closest, especially if they add ownership. That was the Rust inventor's original goal, not just safety at minimal performance cost. I think ownership should be a minimal requirement for any future language, and we should bolt it on to any that we can. Fine-grained permissions for dependency trees as well. I like static types mostly because they let me code faster, not for correctness; strong types certainly help with that though. JIT makes static types have some of the same ergonomic problems as dynamic ones, though. I think some sort of AGI enslaved to do type inference and annotate my code might be OK, and maybe it could solve FFI for complex types over the C ABI while it is at it.
There's no ownership concept, but in the JaneStreet fork, there is something resembling lifetimes[1].

[1]: https://blog.janestreet.com/oxidizing-ocaml-locality/

Yeah, OCaml is awesome! Frankly, if it had a more familiar syntax but the same semantics, I think its popularity would have exploded in the last 15 years. It's silly, but syntax is the first thing people see, and it is only human to form judgments during those moments of first contact.
F# has better syntax but is ignored. :(
> Frankly, if it had a more familiar syntax but the same semantics

That's what ReasonML is? Not quite "exploding" in popularity, but perhaps more popular than OCaml itself.

Interesting! I'm actually unaware of this, but will look into it.
Don't forget ReScript
Funny, because the semicolons and braces syntax is one of the things that puts me off Rust a bit, and I was not excited to see it in Dada
Syntax in programming languages is a question of style and personal preference. At the end of the day, syntax is meant to help programmers communicate intent to the compiler.

More minimalist syntax trades redundancy and specificity for less typing and reading. More verbose, even redundant, syntax is in my opinion better for languages, because it gives the compiler and humans "flag posts" marking the intent of what was written. For humans that can be a problem, because when two things need to be written for a specific behavior, they will tend to forget one of them; but for compilers that's great, because it gives them a lot of contextual information for recovery and for more properly explaining to the user what the problem was.

Rust could have optional semicolons. If you go and remove random ones in a file, the compiler will tell you exactly where to put them back, 90% of the time, when it isn't ambiguous. But in an expression-oriented language you need a delimiter.
It isn't necessarily my preference either, but it's the most familiar style of syntax broadly, and that matters more for adoption than my personal preferences do.
Yeah, I like the underlying ideas and I can deal with the syntax, but I wouldn't expect anyone else to :-/
Kotlin scratches that itch well for me. My only complaints are that exceptions are still very much a thing to watch for, and that ADT declarations are quite verbose compared with more pure FP languages.

Still, the language is great. Plus, it has Java interop, JVM performance, and Jetbrains tooling.

I've always wondered if global type inference wouldn't be a game changer. Maybe it could be fast enough with caching and careful language semantics?

You could still have your IDE showing you type hints as documentation, but have inferred types to be more fine grained than humans have patience for. Track units, container emptiness, numeric ranges, side effects and idempotency, tainted values for security, maybe even estimated complexity.

Then you can tap into this type system to reject bad programs ("can't get max element of potentially empty array") and add optimizations (can use brute force algorithm because n is known to be small).

Such a language could cover more of the script-systems spectrum.

Type inference is powerful but probably too powerful for module-level (e.g. global) declarations.

Despite type systems being powerful enough to figure out what types should be via unification, I don't think asking programmers to write the types of module declarations is too much. This is one area where forcing work on the programmer is really useful to ensure that they are tracking boundary interface changes correctly.

People accept manually entering types only at a relatively high level. It'd be different if types were "function that takes a non-empty list of even numbers between 2 and 100, and a possibly tainted non-negative non-NaN float in meters/second, returning a length-4 alphanumeric string without side effects in O(n)".
One of the other reasons global inference isn't used is that it causes weird spooky action at a distance: changing how something is used in one place will break other code.
I've heard that, but never seen an example*. If the type system complains of an issue in other code after a local change, doesn't that mean that the other code indeed needs updating (modulo false positives, which should be rarer with granular types)?

Or is this about libraries and API compatibility?

* I have seen examples of spooky-action-at-a-distance where usage of a function changes its inferred type, but that goes away if functions are allowed to have union types, which is complicated but not impossible. See: https://github.com/microsoft/TypeScript/issues/15114

Try writing a larger OCaml program and not using interface files. It definitely happens.
I've never used OCaml, so I'm curious to what exactly happens, and if language design can prevent that.

If I download a random project and delete the interface files, will that be enough to see issues, or is it something that happens when writing new code?

If you delete your interface files and then change the type used when calling a function, it can cascade through your program and change the type of the function parameter. For this reason, I generally feel function-level explicit types are a fair compromise. However, making that a convention instead of a requirement (so as to allow fast prototyping) is probably fine.
Just require it for public functions. Your own code can be as messy as you want under the hood.
> If the type system complains of an issue in other code after a local change, doesn't that mean that the other code indeed needs updating

The problem is when it doesn't complain but instead infers some different type that happens to match.

I dabbled a bit with ReasonML, which has global type inference, and the error messages from the compiler became very confusing. I assume that's a big reason it hasn't gained more adoption.
You’ve just described scala.
Ha, no. Scala does contain this language the parent described, but alongside the huge multitudes of other languages it also contains.
Scala is an absolutely small language. It is just very expressive, but its complexity is quite different than, say, C++'s, which has many features.
In my view you have compared it to the only other language for which it is small by comparison :) But different strokes for different folks! I have nothing against Scala, its multi-paradigm thing is cool and impressive, it just isn't for me except by way of curiosity.
Could you list all the features you are thinking of?
I think all the links in the first two sections in the What Is Scala[0] docs give the flavor pretty well. It contains a full (and not small) set of OO language functionality, alongside an even more full-featured functional language.

There are a lot of adjectives you can use to describe Scala - mostly good ones! - but "small" just isn't one of them.

0: https://docs.scala-lang.org/tour/tour-of-scala.html#what-is-...

It's a personal preference but I'm not a big fan of JVM languages - big startup costs and not having one "true" runtime that is just compiled into the binary are my main reasons. I've spent so much time fiddling with class paths and different JRE versions...
That sounds… bad?

The whole point of rusts type system is to try to ensure safe memory usage.

Opinions are opinions, but if I’m letting my runtime handle memory for me, I’d want a lighter weight, more expressive type system.

Rust's type system prevents bugs far beyond mere memory bugs. I would even go as far as claiming that the type system (together with the way the standard library and ecosystem use it) prevents at least as many logic bugs as memory bugs.
Besides preventing data races (but not other kinds of race conditions), it is not at all unique. Haskell, OCaml, Scala, F# all have similarly strong type systems.
The type system was built to describe memory layouts of types to the compiler.

But I don't think it prevents any more logic bugs than any other type system that requires all branches of match and switch statements to be implemented (like Elm, for example).

It prevents a lot more than that. For example, it prevents data races through Send/Sync trait propagation.
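
For instance, in a sketch like the following (my own illustration, not from the thread), the shared counter is only reachable through the guard returned by `lock()`, so unsynchronized access cannot even be expressed:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0));
        let mut handles = Vec::new();
        for _ in 0..4 {
            let counter = Arc::clone(&counter);
            // The closure must be Send, which Arc<Mutex<i32>> is.
            handles.push(thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap()); // 4
    }
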
I’m assuming by rust’s type system they mean without lifetimes. In which case it’s existed in lots of GC languages (OCaml, Haskell) but no mainstream ones. It isn’t really related to needing a GC or not.
You still want RAII and unique references, but rely on GC for anything shared, as if you had a builtin reference-counted pointer.

I do also believe this might be a sweet spot for a language, but the details might be hard to reconcile.

I haven’t used Swift so I might be totally wrong but doesn’t it work sort of like you describe? Though perhaps with ARC instead of true GC, if it followed in the footsteps of Objective-C.
Possibly, yes. I haven't used swift either though. Does it have linear/affine types?

Edit: I would also prefer shared nothing parallelism by default so the GC can stay purely single threaded.

Without lifetimes, Pins, Boxes, Clone, Copy, and Rc (Rc as part of the type itself, at least)
> The whole point of rusts type system is to try to ensure safe memory usage.

It isn't though. The whole trait system is unnecessary for this goal, yet it exists. ADTs are unnecessary to this goal, yet they exist. And many of us like those aspects of the type system even more than those that exist to ensure safe memory usage.

It is the first and foremost goal of every language choice in rust.

I think traits muddy that goal, personally, but their usefulness outweighs the cost (Box<dyn ATrait>)

I should’ve probably said “the whole point of rusts type system, other than providing types and generics to the language”

But I thought that went without saying

> It is the first and foremost goal of every language choice in rust.

It ... just ... isn't, though.

I mean, I get what you're saying, it's certainly foundational, Rust would look incredibly different if it weren't for that goal. But it just isn't the case that it is "the first and foremost goal of every language choice in rust".

I followed the language discussions in the pre-1.0 days, and tons of them were about making it easier and more ergonomic to create correct-if-it-compiles code, very often in ways that had zero overlap with safe memory usage.

Traits don't "muddy that goal"; they are an important feature of the language in and of themselves. Same thing with the way enums work (as algebraic data types), along with using Option and Result for error handling rather than exceptions. Same thing with RAII for tying the lifecycle of other resources to the lifecycle of values.

The memory safety features interact with all these other features, for sure, and that must be taken into account. But there are many features in the language that exist because they were believed to be useful on their own terms, not in subservience to safe memory usage.

And it's not just about "providing types and generics to the language", it's a whole suite of functionality targeted at static correctness and ergonomics. The ownership/lifetime/borrowing system is only one (important!) capability within that suite.

The whole reason I got interested in Rust in the first place was because of the type system. I viewed it as "Haskell types but with broad(er) adoption". The fact that it also has this neat non-GC but memory safe aspect was cool and all but not the main sell for me.
I like Rust’s type system just fine but for me it’s types combined with language features like matching that draw me to Rust. When I was still learning I made an entire project using Arc<> with no lifetimes at all and it was actually a great experience, even if it’s not the textbook way to use Rust.
That's interesting - so you used Arc even if you didn't need thread safety?

Lifetime elision works pretty well, so you don't often need to specify lifetimes

It usually pops up when you use generics / traits (what concrete type does it match to?)

Honestly, I think syntax for Arc (and/or Rc or some generalization of the two) and more "cultural" support for writing in that style would have benefitted rust back when 1.0 was being finalized. But I think the cow is out of the barn now on what rust "is" and that it isn't this.
Yes, if you think about it, it's a bit weird that async gets first-class syntactic treatment in the language but reference counting does not. A similar approach of adding a syntactic form without mandating a particular implementation could have been taken, I think.

Same for Box, but in fact Rust went the opposite way and turfed the Box ~ sigil.

Which I actually feel was a mistake, but I'm no language designer.

Async has to get first-class treatment in the syntax because the whole point of it is a syntax-level transformation, turning control flow inside out. You can also deal with Future<> objects manually, but that's harder. A special syntax for boxed variables adds nothing over just using Box<> as part of the type, similar for Rc<> (note that in any language you'll have to disambiguate between, e.g. cloning the Rc reference itself vs. duplicating its contents, except that Rust does it without having to use special syntax).
Yeah, but personally I think Rc/Arc is more deserving of syntax than Box!
A long time ago, it did have specialized syntax! We fought to remove it. There’s a variety of reasons for this, and maybe it would make sense in another language, but not Rust.
For Arc/Rc? I don't recall that! What was it? I recall it being `&borrowed`, `~boxed`, `@garbage_collected`.

Aaaah, I'm realizing in typing this that the `@foo` syntax was actually implemented via reference counting? I think my intuition at the time was that the intention was for those to eventually be backed by a mark-and-sweep GC, which I did think was a poor fit for the rest of the language. But as just a syntax for reference counting, I honestly think it might have been an ok fit.

Or maybe not, I'm ambivalent. But the syntax thing in my comment is more of a red herring for what I think is more of a cultural "issue" (to the small extent it is an issue at all), which is that most Rust projects and programmers seem to try to write in a style that defaults to only choose reference counting when they must, rather than using a style of optimizing them out if they show up in a hotspot during profiling.

Yes, I'm referring to @foo, which maybe in the VERY old days had a GC, but from when I got involved in 2012 it was reference counting, IIRC.

Regardless of the specifics here, the same problems apply. Namely that it privileges specific implementations, and makes allocation part of the language.

Yeah, which circles back to the thread-starter's comment. My thought-experimental different version of rust would not mind shipping with a privileged implementation of garbage collection, or having allocation be part of the language.

It wouldn't be a good fit for projects like the ones at Oxide :) I'm very glad Rust itself exists, with good support for use cases like those!

@gc references were Arc under the hood!
Totally makes sense! Not sure if I never knew or if that knowledge got lost in the sands of the past decade (or more, I think?) of time.
By "rusts type system" I mean enums with exhaustive pattern matching and associated structs, generics, conventional option and results types, and so on. None of that necessarily has anything to do with lifetimes as far as I understand.
There are a bunch of languages that fit the bill already. F#, OCaml, Haskell and Scala all come to mind.

You might have to lose a few parens though!

Take a look at Gleam!

For me it seems like the perfect match.

The funny thing is that rust used to have things like garbage collection. For the kind of language Rust wanted to be, removing them was a good change. But there could always be a world where it kept them.

https://pcwalton.github.io/_posts/2013-06-02-removing-garbag...

> the kind of language Rust wanted to be

That has changed through the years: https://graydon2.dreamwidth.org/307291.html

The @blah references were actually just Arc sugar
The expressive type system of Rust is backed by use-site mutability; use-site mutability is backed by single ownership; single ownership is made usable by borrow checking. There's a reason no language before Rust has been like Rust without being a functional language (and if that's no object, then you can use OCaml).
Totally agree! But I think it's a "both and" rather than an "either or" situation. I can see why people are interested in the experiment in this article, and I think your and my interest in the other direction also makes sense.
> but in the end it's adoption and the ecosystem that keeps me from using them.

Well, since you can't really use a language without high adoption, even if something comes up with all the features you want, you still won't be able to use it for decades or longer.

Isn't that F#?
Yeah, same for a scripting language too - something like Lua but as expressive as Rust.

There is Rune, but like you mentioned the issue is adoption, etc.

TypeScript maybe?
If we are going that far, I suggest hopping off just one station earlier at Crystal-lang.
Yep, I think Crystal is the thing that is making a real go at essentially this suggestion. And I think it's a great language and hope it will grow.
Do you know how Crystal compares with Haxe? That's another one that might fit the requirements nicely.
I don't understand the Haxe documentation but it seems to also have some kind of algebraic data type.
Maybe ReScript?
You might like Kotlin. It'll also give you access to the entire JVM ecosystem.
I've written a lot of Kotlin and it does indeed come very close! Now if only it wasn't bound to Java's bytecode under the hood...

Whenever I've had to write Kotlin for Android in the past I did quite enjoy it. The ecosystem seems very enterprise-y when it comes to web, though: forced adherence to object-orientedness, and patterns like 100 files, 5 folders deep, with 10 lines of code each keep cropping up in most Kotlin projects I've seen.

Is that a blessing or a curse?
A blessing. Do you really want to write all the libraries from scratch for a new language? Do you want to come up with portable abstractions that work well on Windows? (and don't think you can skip that, people will ask).

Most people don't. That's not the fun part of language design.

Isn't that just the Boehm GC with regular Rust?
checkout Gleam.
... so OCaml or StandardML then
I do like the underlying ideas, and OCaml has been on my radar for a while. However, from my experience, functional languages with a big F always tend to feel a bit too "academic" when writing them to gain enough mainstream adoption.

Imperative code with functional constructs seems like the most workable approach to me, which rust, go, and other languages like kotlin, crystal etc. all offer.

Or Haskell!
OCaml, yes, but not Haskell. Haskell does include these things the parent wants, but similar to how Rust ends up being quite "captured" by its memory semantics and the mechanics necessary to make them work, Haskell is "captured" by laziness and purity and the mechanics necessary to make those work.

Also, syntax does actually matter, because it's the first thing people see, and many people are immediately turned off by unfamiliarity. Rust's choice to largely "look like" c++/java/go was a good one, for this reason.

I learned SML/NJ and OCaml a bit over 20 years ago and liked them, but when I tried my hand at Haskell my eyes glazed over. I get its power. But I do not like its syntax; it's hard to read. And yes, the obsession with purity.
Exactly right. I quite like haskell in theory, but in practice I quite dislike both reading and writing it.

But I like ocaml both in theory and practice (also in part due to having my eyes opened to SML about 20 years ago).

I actually preferred SML/NJ when I played with writing it, but OCaml "won" in the popularity contest. Some of the things that made OCaml "better" (objects, etc.) haven't aged well, either.

Still with OCaml finally supporting multicore and still getting active interest, I often ponder going back and starting a project in it someday. I really like what I see with MirageOS.

These days I just work in Rust and it's Ok.

Yep, right there with you. OCaml was only ever better in my view because it had developed enough libraries to be an actual pragmatic choice, unlike the other languages in that family. And yep, Rust is perfectly good too, IMO, but I do find that I rarely-to-never actually care about all the zero-cost abstractions that make it "hard" to use.
OCaml's object system is very nice, though. Structural typing with full inference is pretty awesome, and it also cleanly decouples subtyping from implementation inheritance.
or F#
use scala
Right? One day... sigh
I like the idea, but please no "async/await". In a higher-level language, green threads like Go's are the correct answer IMO (and I'm not a Go fan, but I feel they got this part right).

Gradual typing is interesting, but I wonder if it's necessary. Static typing doesn't have to feel like a burden, and gradual typing could make it hard to reason about performance. I think more type inference would be better than gradual typing (like OCaml/ML).

Personally I love explicit coroutines for their flexibility. It's great to be able to multiplex a bunch of IO-bound operations on a single thread, defining some in chains and others to execute in parallel. It's great to be able to easily decide when I want to wait for them all to finish, to do something like `select`, or to spin them off into the background. Rust's ownership occasionally makes this a bit more of a challenge than I would like it to be, but I certainly wouldn't trade that ability away for a more "convenient" syntax.
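
For example, a sketch with the `futures` crate (my own illustration; `fetch_a` and `fetch_b` are stand-ins for real IO-bound work) that multiplexes two futures on a single thread and awaits them together:

    // Assumes: futures = "0.3" in Cargo.toml
    use futures::executor::block_on;
    use futures::join;

    async fn fetch_a() -> u32 { 1 }
    async fn fetch_b() -> u32 { 2 }

    fn main() {
        // join! polls both futures concurrently on this one thread;
        // neither runs until the executor drives them.
        let (a, b) = block_on(async { join!(fetch_a(), fetch_b()) });
        println!("{a} {b}");
    }
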
Elixir and its Task module (https://hexdocs.pm/elixir/1.12/Task.html) is the best of both worlds here.

The fundamental concurrency system is green threads (similar to Go), which makes for a fantastic programming model where you spend your time writing linear blocking code, while actually having full parallelism. This is achieved both with the VM and the abstractions built on top like GenServers.

The Task module is a convenience that allows you to do "await" type work when that makes sense - because (as you describe) sometimes it does.

"Gradual typing is interesting, but I wonder if necessary."

Open question: Are there any languages that can be used in a (decent [1]) REPL, that are strongly typed, but do not have Hindley–Milner-based type inference?

We have multiple concrete proofs that you can have a REPL with Hindley-Milner inference, but I'm curious if this is perhaps a concession to the difficulty of a strongly-typed REPL without a deeply inferable type system. But it's just an idle musing I'm throwing out to see the response to.

[1]: That is, for example, multiple people have put a Go REPL together, but anyone who has used a "real" REPL from the likes of Lisp, Haskell, Erlang, O'Caml, Python, etc., will not find it a "decent" REPL, as Go just can't have one for various reasons.

Scala has a REPL. It uses HM, but has limitations on type inference due to subtyping.
I'm not aware of any technical reasons why a given language would profoundly struggle to have a good REPL. I think it's mostly a matter of culture where REPLs aren't a priority in some language ecosystems because programmers there don't generally work that way.
I'm not immediately aware of one either, which is why I asked. HM does have its advantages, but static languages are generally pretty clear on the type of an expression without annotations, and just giving a variable the type of whatever it is set to achieves most of what you're looking for. It just occurred to me that I couldn't name an instance of a static language without HM that does that, though. (At least, I'm assuming Lisp REPLs generally operate dynamically.)
One thing that does make it kind of weird is that a lot of statically typed languages that aren't from the ML family have a grammar top level that isn't imperative. In C#, Java, Dart, etc. The top level of a source file is purely declarative.

That can make a REPL sort of semantically weird. Do you allow variable declarations? If so, are they local or global? Do you allow class declarations in them? If so, can they access previous declarations?

All of that's easier in a dynamically typed language where the top level of a program is more or less just regular imperative code.

It's not insurmountable though, because you can be a little hand-wavey about semantics in a REPL if needed.

> but one that was meant to feel more like Java or JavaScript

Those are two very different feelings though!

compared to Scala and TS they are the same 'ewww' :S
I've written a bit of Rust, and I was left with mixed feelings that seem to still apply here:

- loved the memory safety patterns when compared to the horrible things that you can do with C++

- found almost everything where it differed to have a harder-to-parse syntax that I could never get used to. The implicit return at the end of a function, for instance, makes it harder for me to visually parse what's being returned, since I really depend on that keyword.

Code in general is hard for me to mentally read. I know it sounds nitpicky, but to me all keywords should be obviously pronounceable, so something like "func" instead of "fn" would be mandatory. Also, using the permission keywords where I'd expect the type to be seems a bit strange, as I'd imagine that keyword prefixing the variable -- that's just how I think though.

It does seem like less decorator magic and symbol-based syntax would make it easier for beginners to grasp.

I may sound like a curmudgeon, but I'd prefer only one type of language innovation at a time.

> The implicit return at the end of a statement for instance make it harder for me to visually parse what's being returned, since I really depend on that keyword.

Cutting my teeth on Schemes and MLs and now working in Python, I have the complete opposite experience. It's jarring to have to specify return. What else would I want to do at the end of an expression? It seems tautological. The real reason it's there in Python is early return, which is even more dangerous and jarring.

I know it's not very FP, but you might explicitly not want to return anything and just modify the data.
That's perfectly acceptable and expected. F# supports OOP and imperative programming just as much as it does functional programming. In the case of such functions and expressions, the value returned is of type `unit`, which has the single value `()`. In F#, expressions that return `unit` have their value implicitly ignored if they are not the last expression in a code block. Other expressions returning non-`unit` values that aren't at the end of a code block will generate a warning. In such cases, for example where a function performs a required side effect and returns a value other than `()` that you don't need, you can use `|> ignore` to get rid of the warning, since it says you are explicitly choosing to ignore the returned value.
Give it time. The syntax differences are real, but not insurmountable. I grew to prefer the location of the return type in function syntax despite having 15 years of C++ under my fingers.
I’m in the middle of working through The Rust Book, and I haven’t written any serious code with it yet, so interpret this through that lens.

When I looked at rust code before, it all seemed a bit weird. I couldn’t immediately understand it, but I’ve since come to realize this was because the dozen or so languages I can read well don’t really resemble rust, so my pattern matching was a bit off.

The more I learn about the syntax and core concepts, the more I’m learning that my brain absolutely loves it. Once I started to understand matches, lifetime syntax and the core borrowing mechanics, things clicked and I’m more excited about writing code than I’ve been since I taught myself GW-BASIC 25 years ago.

Just sharing this anecdote because I find it interesting how differently people experience languages. I also have an ongoing friendly debate with a friend who absolutely hates Python, while I rather enjoy it. I’ve tried to understand why he hates it, and he’s tried to understand why I like it. And it all seems to come down to hard-to-define things that just rub us in different ways.

I hope the benefits of rust find their way into more types of languages in the future.

Yeah, I think at some point we all have some internal wiring that is hard to change, while other parts are flexible.

For instance, I'm fine to write C++, Javascript or Python (with types at least). Ruby or Rust for some reason do rub me the wrong way, no matter how much I try to tough it out.

It's interesting that you mention Ruby, because Ruby is another language that just fits the shape of my brain for some reason.

I've always really struggled with the various Lisp variants.

> Code in general is hard for me to mentally read. I know it sounds nitpicky, but to me all keywords should be obviously pronounceable,

Have you tried Ada?

> so something like "func" instead of "fn" would be mandatory.

What about no keywords, like:

    x => ...func body
> Have you tried Ada?

I have tried Pascal in that sphere, which was on the too verbose side.

Arrow notations like in JS/Typescript are fine to parse for me. Some clear symbols are actually easier to read than an unpronounceable alphanumeric.

IMO the main thing that made Pascal verbose is begin...end for all compound statements. If you ditch that - as even Wirth himself did in the next iteration, Modula-2 - the rest is much more palatable. Consider:

   (* Pascal *)
   if a > b then
   begin
       blah;
       blah;
   end
   else
   begin
       blah;
       blah;
   end;

   -- Ada
   if a > b then
       blah;
       blah;
   else
       blah;
       blah;
   end if;

   // C
   if (a > b) {
       blah();
       blah();
   } else {
       blah(); 
       blah();
   }
Pascal is clearly very verbose here, but the other two are pretty similar.

That said I think that punctuation to delimit blocks makes more sense because it makes program structure clearly distinct. Although by the same token I think I'd prefer conditionals and loops to also be symbolic operators, so that there are no keywords inside executable code, only identifiers. Basically something like "?" instead of "if", "@" instead of "while" etc.

{} are isomorphic to 'begin' and 'end' in that snippet, though. Part of the reason why early programming languages were comparatively heavy on keywords is that the symbol set was not standardized across machines, so you couldn't count on some symbols being available. It's why C still supports <% %> as digraphs for curly braces, or <: :> for square brackets. Also why languages such as COBOL go as far as supporting syntax like DIVIDE X INTO Y, because the machine character set might not have a slash or divide sign.
They are, but it makes all the difference in practice. You can write "then begin" and "end else begin", but I haven't ever seen Pascal code written in that style, and I can understand why - it's just much less readable.

OCaml is nice in that "begin" and "end" are aliases for "(" and ")", so you can use whichever one makes the most sense for readability.

Anyway, the point is that the way to go is either 1) make structured programming constructs implicitly compound, so that you don't need braces for multi-statement bodies, or 2) make those braces as compact as possible so that being explicit is not so painful.

> all keywords should be obviously pronounceable

I hear you. Internally, I always pronounced "var" as rhymes with "care", but then a colleague pronounced it "var" as rhymes with "car". I think the same guy pronounced "char" like char-broiled, whereas I had thought of it like "care". And he would say "jay-SON" for json, which I pronounced like Jason.

How would you feel about a notation that is not meant to be pronounced at all?

    +Employee {
    }

where + indicates a class definition,

    :Rename()

where : indicates a class method, and

    ~DoStuff()

where ~ indicates a static function.

Interesting. For all the examples you gave, I'd prefer to see a keyword, since you'd need a lot of symbols for a lot of different things this way. I do find arrow notation or another symbol for lambdas fine, since it's a unique case and not a generic use of symbols in place of keywords.
> I know it sounds nitpicky, but to me all keywords should be obviously pronounceable, so something like "func" instead of "fn" would be mandatory.

Keywords only? How about function names like strspn or sbrk? And how do you feel about assembly language, using mnemonics like fsqrt or pcmpeqd?

BTW, thinking about it, I notice that I need all these lexemes to be pronounceable too, and I have my ways to pronounce sbrk or pcmpeqd. Probably if I said them aloud no one would understand me, but it doesn't matter because these pronunciations are for internal use only.

I'm not sure how many codebases started after 2010 I've seen that have "pcmpeqd" as a method name. This is something I think makes sense only in highly optimized code, but in business logic it's a pain to read.
The absence of GC makes embedded Rust a joy. It can be easily attached to other programs: to Erlang with NIFs, to JavaScript and web pages with WebAssembly, and to Emacs with command-line execution. Microcontrollers as well, of course.

I consider the lightning start-up speed of a program to be one of the killer features of Rust. Rust with garbage collection throws away one of its biggest advantages compared to every other language around.

Garbage collection doesn't make program startup slow. Look at Go, or Java compiled with native-image.
GC doesn't affect startup time.

The slow startup you associate with GC language implementations like ones for Java and JavaScript mostly comes from JIT warmup.

don't think of it as rust with garbage collection, think of it as a GC language with features borrowed from rust
If the claim holds that its performance will be similar to Rust's once you add type annotations, this could become a really attractive language!

As easy as JavaScript to write, as fast as Rust when the extra effort to write it is justified.

Still super weird, because the garbage-collector tax avoided by the borrow checker (which is decidedly not gone here) isn't all that big to begin with.

But perhaps it's a viable "training wheels" approach for getting used to borrow-checker-friendly patterns? And I guess a scripting interpreter option that is fully Rust-aware in terms of lifetimes could be truly golden for certain use cases, even if it turns out to be completely hostile to users not fully in tune with the underlying Rust. Sometimes "no recompile" is very important.

I wonder if the genesis story of the project might be hidden in "Dada has a required runtime": perhaps it started with the what-if of "how nice could we make Rust if we abandoned our strict "no runtime!" stance and went for making it runtime-heavy like e.g. Scala"? Then the runtime pulls in more and more responsibility until it's easier to consume a raw AST and from there it's not all that far to making types optional.

Garbage collection is actually faster than generic malloc for allocating memory, because it can work as a simple bump allocator. And there are ways to handle collection efficiently. Malloc is also not entirely deterministic in performance, because the heap can get fragmented. Either way, if latency matters, you end up having to care about (de)allocation patterns at the app level.
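
A toy sketch of the bump-allocation idea (my own illustration; a real collector also tracks liveness and reclaims or compacts memory, which this omits):

    // Allocation is just advancing an offset into a pre-reserved region.
    struct BumpArena {
        buf: Vec<u8>,
        offset: usize,
    }

    impl BumpArena {
        fn new(capacity: usize) -> Self {
            BumpArena { buf: vec![0; capacity], offset: 0 }
        }

        // Returns the start index of `size` bytes, or None when full.
        // `align` must be a power of two.
        fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
            let start = (self.offset + align - 1) & !(align - 1);
            let end = start.checked_add(size)?;
            if end > self.buf.len() {
                return None; // a real GC would trigger a collection here
            }
            self.offset = end;
            Some(start)
        }
    }

    fn main() {
        let mut arena = BumpArena::new(1024);
        println!("{:?}", arena.alloc(16, 8)); // Some(0)
        println!("{:?}", arena.alloc(32, 8)); // Some(16)
    }
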
I think most people's concern with GC is not the allocation side of it? And in any case alloca blows them all out of the water.

> if latency matters you end up having to care about (de)allocation patterns at the app level.

Yes, and you want tools that allow you to precisely describe your needs, which might be more difficult if a lumbering brute is standing between you and your data.

Most GC concerns surround "stop the world" tracing collectors that "pause" the program unpredictably for an indeterminate time. These collectors are bad for real-time and soft real-time applications for obvious reasons. They would do better with a reference counting collector because its GC bookkeeping is smeared predictably across the run of the program.

Most languages don't use reference counting because most applications are either one-shot console apps, GUI apps, or web apps - the latter two operate on "bursts" of input. If your app operates in "bursts" then a tracing GC is superior since you can delay collection until the app is waiting for more input. Real-time apps don't have a moment where they "wait" therefore they should prefer reference counting for its predictable performance.

You have to use the right GC algorithm for the right job, but unfortunately programming language runtimes don't usually offer a choice.

Agreed; it seems weird to me to avoid garbage collection in a high-level language. It is one thing to use escape analysis to avoid creating garbage, but mallocing every object is going to be slower than a well-tuned GC.
I would say that in practice 80% of the values go on the stack, 18% in Box and 2% in an Arc/Rc. That's why Rust code tends to be fast: the common cases are really easy to represent with the borrow checker, so a hypothetical GC doesn't have to perform escape analysis to see if it is ok to specialise those allocations, while the more uncommon cases can still be represented, albeit more verbosely than a GCd language would need.
> mallocing every object

So don't do that then? Put most things on the stack. It's far faster than any allocation.

In the context of escape analysis (which would place many objects on the stack) that should have read: mallocing every _heap_ object
That’s also true of a GCd language though.
There's no claim that it's as easy as JavaScript to write.

Rust's "difficulty" stems from its single-ownership model, and this model is "different", not "easier".

https://dada-lang.org/docs/dyn_tutorial/permissions

I personally find the semantics of javascript a lot harder to internalize than rust due to its scoping and very unintuitive object system. I can't imagine this is any more difficult than that.
I agree but most modern JS doesn't use prototypal inheritance.

JS has plenty of bad parts you shouldn't use. Classes are the main one.

Even checking the class of a given object feels quite flimsy to me, although this perhaps isn't a huge problem in a coherent codebase.

This isn't meant to be an attack on javascript as a worthwhile tool to learn, by the way, just a testament to the fact that it's not an easy language to master in the slightest.

I had a coworker who wrote

    return 
        { ... }
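    // ASI inserts a semicolon after `return`, so this parses as
    // `return;` and the object literal is never returned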
And JS helpfully inserted a semi-colon after return

This is a feature you need to know about and you have to go out of your way not to get rekt by it

From what it looks like, it seems more expressive and intuitive to me.
Interesting. The about page starts with a quote from Dada Manifesto 1918. The quote is changed, as explained in a footnote: "Updated to use modern pronouns."

Here is the original quote:

I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices his art in his own way, if he knows the joy that rises like arrows to the astral layers, or that other joy that goes down into the mines of corpse-flowers and fertile spasms.

changed to :

I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices their art their own way.

Dada-lang about page: https://dada-lang.org/docs/about/

Tzara, Dada Manifesto 1918: https://writing.upenn.edu/library/Tzara_Dada-Manifesto_1918....

I don't understand the comment in the method print_point in the class Point of the tutorial.

    [...]
    # This function is declared as `async` because it
    # awaits the result of print.
    async fn print_point(p) {
        # [...]
        print("The point is: {p}").await
    }

    [...]
From the first page of the tutorial:

> Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.

So, what it boils down to is that async/await are like lazily computed values (they work a bit like the lazy/force keywords in OCaml, for instance, though async seems to be reserved for function declarations). If that is the case, the method print_point is forcing the call to print to get that thunk evaluated. Yet the method itself is marked async, which means that it would be lazily evaluated? Would it be the same to define it as:

    fn print_point(p) {
        print("The point is: {p}")
    }
If not, what is the meaning of the above? Or with various combinations of async/await in the signature & body? Are they ill-typed?

I wish they'd provide a more thorough explanation of what await/async means here.

Or maybe it is a dadaist[0] comment?

[0] https://en.wikipedia.org/wiki/Dada

I think they didn't do a very good job explaining it. Await doesn't just mean "please run this thunk", it means "I am not going to deal with this thunk, can someone come and take over, just give me the result in the end".

What this means, concretely, in Rust, is `.await` will return the thunk to the caller, and the caller should resume the async function when the result is ready. Of course the caller can await again and push the responsibility further back.

The most important thing here, is that `.await` yields the control of execution. Why does this matter? Because IO can block. If control wasn't given up, IO will block the whole program; if it is, then something else will have a chance to run while you wait.
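
Rust's futures behave the same lazy way, which may be the intuition the Dada docs lean on. A small sketch (my own, using the `futures` crate):

    use futures::executor::block_on;

    async fn print_point() {
        println!("the point");
    }

    fn main() {
        // Calling an async fn runs none of its body; it just builds a
        // future (the "thunk"). Nothing prints on this line.
        let thunk = print_point();

        // Only driving the future to completion executes the body.
        block_on(thunk);
    }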

So, you mean that this thunk is produced by the async function, and the await keyword will run it asynchronously?

In other words, print produces a thunk, and print_point also produces a thunk, and when await is used on the latter, it is executed asynchronously, which will execute the print also asynchronously. So we end up with 3 different execution contexts: the main one, plus one for each "await"?

What is the point of this, as opposed to executing the thunk asynchronously right away? Also, how does one get the result?

> the await keyword will run it asynchronously?

From the point of view of print_point, await executes the thunk synchronously: print_point's execution stops and waits for print to finish its work. But a caller of print_point might want to run print_point asynchronously, so print_point is an async fn, and the caller can do something more creative than just await.

Thank you.

So, it seems I had understood the principle the way you explain it, but the code comment on print_point (as I indicated in the top of this thread) isn't saying that.

I suspect they're heavily relying on intuition coming from Rust, where both of those forms are okay. The one from TFA is sugar for your version. This works fine as long as there is only a single await point, otherwise you have to transform the syntax into a bind, which you might not be able to legally do manually (in Rust at least) if you hold a borrow across the await point.
> What if we were making a language like Rust, but one that was meant to feel more like Java or JavaScript, and less like C++?

That would be Swift?

Interesting experiment. But it does seem like there are increasing numbers of languages trying to crowd into the same spaces.

Yes, but languages don't compose well. For example, you can't take Swift because you like all the things the language does and then add in first class support for Linux and Windows. Thus, anytime a language doesn't align with EVERY thing you need it to do... a new language evolves.
The main idea is that leases are an easier concept to understand than borrowing and lifetimes?

I don't think they are; it sounds like a concept of similar complexity, and it won't make this an "easy language".

People are scared of TypeScript, so a typed language with an extra ownership concept will sound exactly like Rust in terms of difficulty.

Not that I get the reputation of Rust being hard, even as a complete novice I was able to fight a bit with the compiler and get things working.

The gradually typed approach is nice but it just sounds like smarter type inference would get you 99% there while keeping the performance (instead of using runtime checks).

Not having unsafe code is both interesting and limiting. I keep all my code safe for my own mental sanity, but sometimes having bindings to some big C/C++ library is convenient (e.g. Qt or OpenCV).

Yeah, it's not clear who this is for. If you can handle ownership, this doesn't seem to have many benefits over Rust. If you can't handle ownership, and don't mind a runtime, just use Swift, which seems to be the main inspiration for Dada's syntax.
Reminds me of Dyon, a scripting language for Piston.

It's dynamically typed and uses lifetimes instead of a garbage collector.

https://github.com/PistonDevelopers/dyon/issues/173

My opinionated opinion: programming languages have three goals. 1) Be safe: don't make mistakes. 2) Be expressive: the Sapir-Whorf hypothesis. 3) Be easy to use.

JavaScript (new) is +++2, and ++3 (to me). Java is +++1 & --2, -3.

Personally I like OO ("has a") but think Class-ification ("is a") is a big mistake. Take a truck and a car. Start replacing the pieces of the car with pieces from the truck. When is the car not a car? Arbitrary. When does the car have a tailgate, a flat bed?

That is not a joke. Classes and Types are a way to think (Sapir Whorf) that makes you do strange things.

The interesting thing about Dada is the "borrow", "share" etc and seems very good. But then instead of wrapping it in a class can't we just use an Object?

It's a bit frustrating that I have to click around hunting for an example of the syntax.

If you are making a new programming language, please do us a favor and put your Hello World syntax example right on the landing page.

I thought the docs themselves were a work of conceptual art i.e. the docs themselves were "dadaist" and were the main point of the site
what does "creators of rust" mean? Graydon? niko? pcwalton?
ah, thank you! I couldn't find anything on the website itself, should have thought to look at the code.
It is indicated in the last paragraph of the FAQ: https://dada-lang.org/docs/about/faq (It is indeed hard to find though!)
Also Brian Anderson (brson) was/is a significant rust contributor
I think there isn't enough research into languages with affine/linear typing (the property that values of certain types can't be copied, which is partly what the borrow checker enforces in Rust; a minimal sketch follows the links below). I'm super sold on it for enhancing safety. Vale with its "Higher RAII"[0] is the only other example I was aware of until seeing this.

Rust is great but being an early adopter has made its usability imperfect in places. Combining substructural typing with gradual typing and OOP is interesting here. Others in this thread have also mentioned wanting a higher-level Rust, like Go. I'd like to see a purely functional Rust. Haskell has experimental support for linear typing[1], but I suspect a language built with it from the ground up would be very different.

[0]: https://verdagon.dev/blog/higher-raii-7drl

[1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/line...
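For anyone who hasn't met the idea, a minimal Rust sketch of affine usage (hypothetical types): a value with no Copy or Clone can be consumed at most once, and the compiler rejects a second use:

    struct Ticket { seat: u32 } // no Copy, no Clone: affine

    fn redeem(t: Ticket) -> u32 {
        t.seat // consumes the ticket by value
    }

    fn main() {
        let t = Ticket { seat: 7 };
        let seat = redeem(t);
        // redeem(t); // error[E0382]: use of moved value: `t`
        println!("seat {seat}");
    }

Note that Rust is affine rather than linear: nothing stops you from silently dropping the ticket unredeemed, which is exactly the gap Vale's Higher RAII and Haskell's linear types aim to close.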

Have you looked at Austral?[0] It is a systems language with affine types as the main selling point. I really, really like its specification[1], in particular the design goals and rationale.

[0]: https://austral-lang.org/

[1]: https://austral-lang.org/spec/spec.html#goals

I was suggesting that "a_print" (an auto-running async printer) might be better for one of the most common features a programmer uses.

I'm coming from Python. For situations where people reach for C/C++ levels of performance and control, I think people are aware of the need for high-performance memory-safe languages that are easier to use than Rust while keeping as many of Rust's benefits as possible. So I am quite excited by the thinking from Dada and the people behind Rust, and I'm also intrigued by SerenityOS's Jakt language project. I hope the insecure "C code problem" gets a smooth migration path that lets C/C++ devs, TypeScript devs, and others make progress quickly in a powerful way. What other alternative languages share Dada's aspirations? Jakt? Vale (I understand a lead dev is unwell, so it's slowed a bit lately)? D? Go? Obviously AI will have a big impact. Which language is going to have a big impact in this space?

I've dabbled in PL research before, and not to downplay the work (this is just my opinion), but the Rust ownership system is too invasive. It prevents entire classes of architectures and algorithms from being directly represented without auxiliary structures and other code contortions. I don't think it is an approach that should be mimicked.
The Rust ownership system was built to be compositional: an architecture or algorithm must not just be "safe" in isolation, it must also preserve that safety when interacting with the rest of the system, even as either part gets modified or evolves further. Practically speaking, this is where many proposed architectures that may indeed appear "safe" run into issues. (If you can't provide these guarantees, the idiomatic approach in Rust is to isolate that part within an unsafe module and document the expectations that said module imposes with respect to the rest of the system; see the sketch below.)
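A tiny sketch of that idiom (hypothetical buffer type): the unchecked operation stays private, the invariant is established at the safe boundary, and a SAFETY comment records the expectation the module imposes:

    mod buffer {
        pub struct Buffer { data: Vec<u8> }

        impl Buffer {
            pub fn new(data: Vec<u8>) -> Self {
                Buffer { data }
            }

            // Safe boundary: validate the invariant, then perform the
            // unchecked access internally.
            pub fn get(&self, i: usize) -> Option<u8> {
                if i < self.data.len() {
                    // SAFETY: `i` was bounds-checked just above.
                    Some(unsafe { *self.data.get_unchecked(i) })
                } else {
                    None
                }
            }
        }
    }

    fn main() {
        let b = buffer::Buffer::new(vec![1, 2, 3]);
        assert_eq!(b.get(1), Some(2));
        assert_eq!(b.get(9), None);
    }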
Why the name and the logo? Couldn't find info about it.

Otherwise, the idea of creating something close to Rust but without the complexity sounds interesting. I just hope they don't stick with that name.

They’re both referencing the Dada artistic movement.
Thx
I was thoroughly confused reading the Dada Manifesto[0], starting with the non-existent German meanings of the word dada and getting much stranger from there. Until I found out at the very bottom that it's a riff on a 1916 dada manifesto.

[0]: https://dada-lang.org/blog/manifesto

The logo might be inspired by Marcel Duchamp's _Bicycle Wheel_

see https://www.moma.org/collection/works/81631

++ that gradual types are included … Rust does an awesome job when the contract you want from your compiler is to enforce types strongly for maximum safety, but that is not always the appropriate trade-off for small projects … I prefer a language with a sliding scale, where I can grow a codebase and become more strict if and when it becomes more mission-critical (gradual typing is the answer for this IMO, and it is well done in e.g. Raku).
Dada looks "almost" great! I especially like that it targets wasm; I believe wasm is the future of frontend and also backend with wasi. However, I believe that being gradually typed is a mistake. Dart started being optionally typed and then they made it fully statically typed for very good reasons. I hope they learn from Dart's experience there.
Feel like there would be fewer posts and languages like this if people just took 10 seconds to read about modern C#.
If only it played well with Linux, but Mono is always playing catch-up
> Feel like there would be fewer posts and languages like this if people just took 10 seconds to read about modern C#

The latest dotnet plays reasonably well with Linux and supports the latest framework and language versions.
`sudo apt-get install dotnet-sdk-8.0`

I wonder what that does...

> As of right now, Dada doesn't really exist, though we have some experimental prototypes...

> OK, from here on out I'm going to pretend that Dada really exists in its full glory.

This is a brilliant trick I only recently discovered in another context: write the docs first, to validate the user experience of a novel system.

During architectural reviews, I'm often the annoying person grilling the team on the customer experience. If you don't start from how the customer will interact with it, how are you going to create anything ergonomic?

All too often, the engineering has started at "customers want to be able to do $x", and that's the last time the customer was part of the consideration. The solutions are great, but often miss what it'd be like to actually use them as a customer: lots of footguns, expectations of knowledge a customer couldn't possibly have unless they understood what happens under the hood as well as the engineers do, etc.

If you're cloning parts of TypeScript, please bring along mapped & conditional types!

Feel free to experiment on the syntax, but the concept is amazing, especially if you're planning on being dynamic-ish.

> Dada is object-oriented, though not in a purist way

Are classes cool again?

Only the upper ones.
This might be a naive question, but rather than targeting WASM directly, why not target MLIR (used by projects like Mojo) and use LLVM to compile to WASM?
Amazing way to test if a new computer language is viable! I think that more people (including myself) should take this approach to language design.
Very cool art movement: it's essentially a prototypical form of Photoshop/meme culture as protest against the Nazis. John Heartfield is my favorite Dada artist.

Perhaps his most famous piece is a photo of Hitler captioned "millions stand behind me," showing a donor passing him stacks of cash.

I thought the creators of Rust were a single creator: Graydon Hoare. Is he involved with this?
It's complicated. Graydon Hoare is the founder of the Rust project and led it through its early days, but he left the project before its 1.0 release. The Rust that shipped shares the tenets of the Rust that Graydon created, but it is also different in some foundational ways. Graydon has written about this (and also about why he left the Rust project) online.

https://graydon2.dreamwidth.org/307291.html

https://www.reddit.com/r/rust/comments/7qels2/i_wonder_why_g...

Niko has been involved with Rust since before the borrow checker had even been conceived, back when the entire committer list could be fed with a single pizza.
I would distinguish creator(s) from early key contributors and developers. I'm not aware of the full history of Rust, but I was under the impression that Graydon Hoare is the creator of the language.
Graydon does not have any commits in the repository.
Not new, launched in 2021 apparently.
Love this quote:

> I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices their art their own way.

> Tristan Tzara, "Dada Manifesto 1918"

Sounds a bit like Python but with actual & optional runtime type checking?
Interesting, but the intent seems similar to Chris Lattner's new Mojo language, which has comparable goals and is further along in its development.

https://docs.modular.com/mojo/

It sounds like building mockups for a programming language.
Mojo but for JavaScript
See also Graydon Hoare's Rust-that-could-have-been: https://graydon2.dreamwidth.org/307291.html
What is Dada?
exactly
The contrast of the links against the light background is pretty poor.
Don't know why you were downvoted - this is totally correct; the site only looks right in dark mode.
No upfront types; for me this is unusable, sadly.
Literally every single program you ever created has needed types?

I've probably written hundreds of tiny little utility programs that are a couple of lines at most, and wouldn't have needed types for any of them; it would just add extra verbosity for no gain.

Gradual lifetimes could be interesting, though.
^this!

In garbage-collected languages, please give me gradual / optional annotations that permit deterministic fast freeing of temps, in code that opts in.

Basically to relieve GC pressure, at some modest cost of programmer productivity.

This unfortunately makes no sense for small bump-allocated objects in languages with relocating GC, say typical Java objects. But it would make a lot of sense, even on the JVM, for safe, eager, deterministic release of my 50 MB giant buffers.

Another gradual lifetime example is https://cuda.juliagpu.org/stable/usage/memory/ -- GPU allocations are managed and garbage collected, but you can optionally `unsafe_free!` the most important ones, in order to reduce GC pressure (at significant safety cost, though!).
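For contrast, the opt-in eager release being asked for here is the default in Rust, where dropping a value frees it deterministically at that exact point (a sketch; sizes arbitrary):

    fn main() {
        // A 50 MB scratch buffer.
        let big = vec![0u8; 50 * 1024 * 1024];
        let checksum: u64 = big.iter().map(|&b| b as u64).sum();

        // Deterministic, eager release: the memory goes back to the
        // allocator right here, not whenever a collector next runs.
        drop(big);

        // ...plenty more work that no longer pays for the buffer...
        println!("checksum: {checksum}");
    }
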

Changing a quote to change "his" to "theirs" seems like a very Rust community thing to do.

> Updated to use modern pronouns.

https://dada-lang.org/docs/about/

Just for context, the quoted manifesto was originally written in French (https://monoskop.org/images/3/3b/Dada_3_Dec_1918.pdf). In that version, that particular sentence is gender-neutral: "tout le monde fait son art à sa façon".

I would say that their updated quote is a more accurate translation of the original than the English translation they initially used.

That does make it a lot better, but at the same time makes the footnote even more of a deliberate statement that could have been left out.
> That does make it a lot better

Why?

Non-native-speaker take: I don't care, it just reads a bit "weird" as I learned English before and "theirs" was plural... but I am adaptable.

As long as the meaning of the quote isn't changed I couldn't care less and it seems very important to some people.

What I personally dislike though is the whole "Ask me my pronouns" thing... like: "No, I don't care about your gender or sex, as long as I am not interested in a romantic relationship with you - just tell me what to call you and I'll do it, but more effort? No!"

To elaborate a bit more: I find the topic exhausting not because I hate freedom of choosing your own gender or anything like that, but because I personally do not care about your gender at all.

I don't care about your religion, your skin color, your culture, your sex, your gender... I care about individual people but I don't reduce them to a certain aspect of their existence.

Now I find the whole "Ask me my pronouns" exhausting and also rude because it puts pressure on me to ask you about a topic I am not interested in. Like: I get it, there is social pressure, I understand that you're not happy with certain social "norms" and developments. I totally get that and I guess we are on the same side for many of them, but I still do not care about your gender until I care about your gender. (And also, I don't live in your country probably, so your local politics may be of interest, but I still don't like being forced to talk about them before I can ask a genuine question on e.g. a technology topic ;))

Just write his/her/theirs... and I will respect your choice. I will not think less of you, nor will I put you on a pedestal for something I do not care about.

The "singular they" has been a thing in English for a long time (since the 14th century according to Wikipedia - https://en.wikipedia.org/wiki/Singular_they). I'm a non-native speaker as well, and wasn't taught about it in school either (maybe that has changed in the meantime?). The first time I consciously noticed it is probably in the Sting song If You Love Somebody Set Them Free (https://en.wikipedia.org/wiki/If_You_Love_Somebody_Set_Them_...), which was his debut solo single, so is already quite old itself (1985, although I probably heard it later).
Yeah, it's been commonly used when the object (in the grammatical sense) would typically have a gender but it is unspecified. In fact, it's so common that native English speakers don't even notice they're doing it. (Which produces a steady stream of unintentional humor from those pretending it isn't a thing.) The usage as a sign of respect for specific non-binary and other gender non-conforming people is more modern (2009 is apparently the first recorded example). Although to a great extent, it doesn't matter: language evolves over time, and dictionary definitions necessarily trail usage.

The Wikipedia article is quite detailed and will probably supply more information than anyone particularly wanted. https://en.wikipedia.org/wiki/Singular_they

There is also "singular you" in English.
> Non-native-speaker take: I don't care, it just reads a bit "weird" as I learned English before and "theirs" was plural...

Non-native speaker too; I find it easier to adjust in English compared to my native language (French), probably because the language is less ingrained in me. I embraced the English neutral plural - it's even convenient - but I find myself a bit more annoyed with the so-called French "écriture inclusive", such as "les étudiant.e.s sont fatigué.e.s". Not really pretty IMHO. We could find something better...

> I learned English before and "theirs" was plural..

It's been done before. See the royal plural:

https://en.wikipedia.org/wiki/Royal_we

Pluralis Majestatis exists in my mother tongue too ;)
People who want others to ask them their pronouns before referring to them, what is your reason for doing so?
As an avid fan of Rust, the Rust community is incredibly cringe about this topic.
Seems more cringe to complain about pronouns
I think we’re all saying the same things here. No one wants to hear complaints about pronouns
Actually, what I don't want is a discussion about pronouns being at the top of the comments. Aren't there more interesting topics to discuss about this project?!
Indeed, far more annoying is someone grousing that the TFA update about pronouns "seems like a very [subsection of] HN community thing to do."

And yet here I am, N levels down in this thread, griping about it. Oops.

Changing it is perfectly reasonable, but specifically advertising that you've done it in a footnote is extremely Rust community.
The policing of what other people should or shouldn't care about or advertise they care about is very boorish to me, but of course here I am doing the same thing.
On iPad, I see a back-link from the footnote, but no forward link to it from the corrupted quotation.
Is it still a quote in that case?
I've understood that you can modify quotes but you have to indicate the bits that you've modified. So "It enforces memory safety" becomes "[Rust] enforces memory safety" if you want to modify a quote to make more sense out of context.
Yes, it's just a different translation of the original French quote.
> modern pronouns.

In my native language it is quite old-school - a really polite form.

Singular "they" and "their" are six centuries old in English.
It is also a very Dadaism thing to do. Da!
I mean, Tristan Tzara was Romanian, so it seems likely that this thought isn't originally English anyway, and it's reasonable for a modern translator to render it in a more inclusive way. I expect that an early English Bible and a modern one likewise make different choices about whether a text that's clearly about people generally, and isn't concerned with sex or gender, should say "He" or "They", "His" or "Their", and so on.
I also noticed this, along with the warnings that Dada doesn't really exist yet (which is fine, thanks for the heads up).

I predict this project will have its priorities backwards. There's a group of people who want to govern a programming language project, and inject their ideology into that structure, and maybe there's another group of avid language designers in there too. I think there are more of the first.

How do you “inject ideology” in a programming language?

Compiler error if the variable name is sexist?

> How do you “inject ideology” in a programming language?

I was just talking about the project community and governance. It would be hard to imagine injecting ideology into the language itself.

Oh wait, nevermind...

https://doc.rust-lang.org/beta/nightly-rustc/tidy/style/cons...

This is part of rustc’s test suite. It affects nobody but rustc.
Pournelle’s Iron Law of Bureaucracy
It was only a matter of time before even the creators of Rust grew tired of it being another C++.
Every time I see a new language, I immediately check if it uses significant white space like Python. If it doesn’t, I sigh sadly and dismiss it.
This is such a weird take... I just want to know why that is SO important to you? For me it's one of the things I like least about Python.
Curious, I have the very opposite reaction, although I tolerate Python, but only for the massive amount of libraries and huge community. But as a language? Meh

What makes you so reliant on significant whitespace that any language without it is an automatic dismissal?

Why?
Just so everyone knows: Graydon is not in the list of contributors on GitHub.

https://github.com/dada-lang/dada/graphs/contributors

https://github.com/graydon

Not this again. How many languages do we need? I am having a good time with Go and Python!