The crazy thing about reading this and the comments is that it seems like we have all been daydreaming about completely different versions of a "high level rust" and what that would look like. For me I'd just want a dynamic runtime + simpler types (like "number" or a single string type), but it looks like other people have a completely different list.
Some of the additions here, like a gradual type system, I would really not want in a language. I love gradual type systems for stuff like Python, TypeScript and Elixir, but those are cases where there's already so much untyped code written. I would way prefer the guarantees of a fully statically typed codebase from day one when that's an option.
I loved this, both as a teaching aid, and as an eye-opener that programming languages are just an accumulation of choices with different trade-offs that can all go different ways and result in something that works, perhaps a bit better or perhaps worse, or perhaps just a bit more toward or away from one's own personal taste.
This is sort of the lisp idea of "create the language that is natural to write the application in, then write the application". Or Ruby's take on that idea, with more syntax than lisp but flexible and "human" enough to be DSL-ish.
But somewhat to my sadness, as I've progressed in my career, I've realized that the flip side of this is that, if you're building something big it will require lots of people and all those people will have different experiences and personal preferences, so just picking one standard thing and one standard set of defaults and sticking with that is the way to go. It reduces cognitive overhead and debate and widens the pool of people who can contribute to your effort.
But for personal projects, I still love this idea of thought experimentation around the different ways languages and programming environments could work!
A poor-fitting language is terrible for abstract thinking, on the other hand an internally-consistent and domain appropriate language can unlock new ways of looking at problems.
I'd highly recommend Martin Fowler's work on DSLs to see how you can apply these techniques to large projects.
So in addition to the skill of creating a DSL, you need the skills of thoroughly documenting it, training other people to use it, creating tools for it, and explaining the benefits in a way that gets them more excited than just using an existing Boring Old Programming Language.
Which is certainly possible. You can get non developers excited if they can use it for answering their own questions or creating their own business rules, for example. But it's a distinct skill set from cranking out code to solve problems. It requires a strong understanding of the UX (or DX) implications of this new language.
In the same way, what you listed isn’t a distinct skill set from cranking out code to solve problems. What happens is those skills are now levered. Not the good vibes “leveraged”. I mean in the “impact to success and failure is 100x baseline” sense. If those skills are in the red, you get wiped out.
This is the stopping at 90% problem somebody just posted a link to in another thread. edit: https://austinhenley.com/blog/90percent.html
So you will always be working to bring people along to your design choices and help them understand the (relative) value, or risk forever languishing as the sole contributor.
You don't get buy-in with technology, you get buy-in with ideas.
A question of philosophy: If you have all that, don't you already have a DSL, using a deep embedding in the host language?
With a new DSL, you need to create all of that yourself.
A good one is internally consistent so that users have predictability when writing and reading usage. A good one uses the minimum number of distinct elements required for the problem domain. A good one lets the user focus on what they're trying to do and not how they need to do it.
The principles apply regardless of interface. Physical device, software UI, API, DSL, argument over HN, it's all a continuum.
If you’re an experienced programmer coming in to SAS, your vocabulary for the next LONG time is going to consist primarily of “What The Fuck is this shit?!?”
It wanted to be more than just SQL, but the interoperability with other languages was awful; we couldn't even work with it like SQLite.
This is one of his papers on PL-Detective and mystery languages, for anyone interested: https://www.researchgate.net/publication/220094473_PL-Detect...
[1] https://blog.brownplt.org/2018/07/05/mystery-languages.html
0: https://cs.brown.edu/~sk/Publications/Papers/Published/pkf-t...
Languages with first-class values, pattern matching, rich types, type inference and even a fancy RTS can often be embedded in Haskell.
For one example, it is very much possible to embed into Haskell a Rust-like language, even with borrow checking (which is type-checking time environment handling, much like linear logic). See [1], [2] and [3].
[1] http://blog.sigfpe.com/2009/02/beyond-monads.html
[2] https://www.cs.tufts.edu/comp/150FP/archive/oleg-kiselyov/overlooked-objects.pdf
[3] http://functorial.com/Embedding-a-Full-Linear-Lambda-Calculus-in-Haskell/linearlam.pdf
Work in [3] can be expressed using results from [1] and [2]; I cited [3] as an example of what a proper type system can do. These results were available even before the work on Rust began. But, instead of embedding a Rust-DSL into Haskell, the authors of Rust preferred to implement Rust in OCaml.
They do the same again.
Both strategies are very hard, but one of them is "build a prototype in a weekend" hard and one of them is "build a prototype in a month" hard.
I'm still not that good at it, but my best strategy to date is to try to work in a restricted environment of both the host and the target that are nearly the same.
One example, again borrowed from the Haskell universe, is Atom [1]. It is an embedded language for designing control programs for hard real-time systems, something about as far from Haskell's usual area of application as... I don't know, the Sun and Pluto?
You need to use existing facilities (type checking, pattern matching combinators, etc) of a good implementation language as much as possible before even going to touch yacc or something like that.
why? and how much does it matter, if the goal is to have a compiler/interpreter? (as I assume is the case with Dada, and was with Rust)
It seems very hard to pick a good 'number' (JS's is actually a double-precision 64-bit IEEE 754 float, which almost never feels right).
The SimpleLanguage tutorial language has a bigint style number scheme with efficient optimization:
https://github.com/graalvm/simplelanguage/blob/master/langua...
Python at least has the big num integers, but its "float" is just Rust's f64, the 64-bit machine integers again but wearing a funny hat, not even a decimal big num, and decimal isn't much better.
The bigger problem is precision. The right thing there, IMO, is to default to infinite (like Python does for ints but not floats), with the ability to constrain as needed. It is also obviously useful to be able to constrain the denominator to something like 10.
The internal representation really shouldn't matter that much in most actual applications. Let game devs and people who write ML code worry about 32-bit ints and 64-bit floats.
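As a rough sketch of what "infinite precision by default, constrain when needed" feels like, here is the idea in Rust using the num crate's arbitrary-precision rationals (the crate choice is just for illustration, not something any of these languages prescribes):

    use num::{BigInt, BigRational, One};

    fn main() {
        // Exact rational arithmetic: 1/3 + 1/3 + 1/3 really is 1.
        let third = BigRational::new(BigInt::from(1), BigInt::from(3));
        let one = third.clone() + third.clone() + third.clone();
        assert!(one.is_one());
        // The usual binary-float surprise that exact rationals avoid.
        assert_ne!(0.1_f64 + 0.2, 0.3);
    }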
Note parent said "at least the reals"
I see no reason why I would need to represent it as an accurate fraction instead of two numbers; even if I divide it later, I can always just do that inaccurately, since the exact aspect ratio doesn't matter for resizing images (<1% error won't affect the final result).
- Leopold Kronecker
print("...").await
I'm coming from Python, and I can't help but ask: if my goal as a programmer is to simply print to the console, why should I care about the await? This already starts with non-zero complexity and some cognitive load, like the `public static void main` from Java.
Because that isn't ever anyone's actual goal? Optimizing a language design for "Hello World" doesn't seem like a particularly useful decision.
In Python, print adds your string to the stdout buffer, which eventually gets written out to the console. But that's not guaranteed; if you want that guarantee, you need to call flush on the stdout IO handle.
Dada has taken the approach of making blocking IO operations explicit, rather than purely implicit. The result is that if you want to perform an IO operation, you need to explicitly say when you want to block, rather than allowing an elaborate stack of runtime buffers to dictate what happens immediately, what happens later, and what is going to block further code execution.
In short, this complexity exists in other languages like Python; you've simply not been aware of it, or of the nuanced ways in which it fails. But if you're someone who's wasted hours wrestling with Python's IO system, then you'll appreciate the explicit nature of Dada's IO system.
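To make the buffering point above concrete, a minimal Rust sketch of the same behaviour (stdout writes land in a userspace buffer; the flush is what actually pushes them out to the OS):

    use std::io::{self, Write};

    fn main() -> io::Result<()> {
        let stdout = io::stdout();
        let mut out = stdout.lock();
        // write! only queues the bytes in stdout's userspace buffer;
        // when piped, nothing is guaranteed to be visible yet.
        write!(out, "queued, but maybe not visible yet")?;
        // flush hands the buffered bytes to the OS, making them observable.
        out.flush()?;
        Ok(())
    }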
Arguing that a particular design choice is silly from a purely ergonomic or usage perspective is kind of absurd, given you literally can't use the language at all. Maybe waiting for a mutex, signalling a semaphore, or waiting for page faults should require an await (although it's literally impossible for a language to await a page fault without a lot of cooperation from the OS). The whole point of Dada is you can make those design choices, then work through the consequences. Maybe it turns out they're actually fantastic ideas, once you get past the surface-level issues, or maybe they're terrible ideas. But once again, Dada doesn't actually exist! It's a thought experiment to test all kinds of ideas, but without having to waste all the time and energy building those ideas to discover issues that could have been discovered by simply having a conversation.
Exactly because Dada is just a thought experiment, it's interesting to push the boundaries of such a model in various ways with low stakes.
Constructively, I'm partial to full coroutine abstractions that hide the asynchrony of functions, or, on the other side of the spectrum, to full effect systems.
I think async is a necessary evil in some high-performance languages (like Rust or C++, certainly not Python), but elevating it to something actually desirable from an ergonomic point of view seems just wrong.
Pure concurrency has the advantage that you can do away with a lot of complex and nuanced synchronisation mechanisms, by virtue of the fact that you're not actually sharing memory between parallel lines of computation. That makes writing correct concurrent code quite a bit easier and friendlier. In that world, having clear and explicit markers of when a function call might result in you yielding to the event loop (and thus memory values you read previously in your function might change) is very handy. Especially if there's a nice mechanism to delay that yield until after you've completed all your important memory operations, and have confidence that your computed values are consistent.
Coroutines are certainly a different approach to the same problem: they hide the blocking nature of functions in a neat way, but at the cost of requiring you to start using those complex synchronisation primitives, because any function call or operation might result in an implicit yield, and thus you can't predict when memory values might change.
My first introduction to async/await was the Twisted framework for Python. It wasn't called async/await back then, but the principles were identical. Twisted made it possible to write pretty high-performance concurrent network code in Python, in a way that was very understandable, and _safe_, without resorting to multi-threading or multi-processing. As a result I think async/await in Python is actually a really good idea. When used correctly, it makes it possible to write really nice, performant code in Python, without resorting to parallelism and all the pitfalls that come with that (i.e. synchronisation). Async/await provides a nice middle ground between full-on parallelism and single-threaded blocking code with no ability to interleave IO operations.
That's the bit I reject. In practice you are saying that there are no reentrancy concerns between preemption points (async calls), and that marking those points explicitly in code helps avoid bugs.
I claim that:
a) there can be reentrancy issues even in regions only performing sync calls (due to invoking callbacks or recursively reentering the event loop)
b) if we value explicit markers for reentrancy, we should be instead explicitly marking reentrancy-unsafe regions with atomic blocks instead of relying on the implicit indirect protection of within-async regions.
With async you still have to think about synchronization, but instead of being explicit and self-documenting in code with synchronized objects and critical sections, you have to rely on the implicit synchronization properties. And if you write code that relies on it, how do you protect against that code breaking when someone (including ourselves) shuffles some async calls around in three months?
In fact something like rust, thanks to lifetimes and mutable-xor-shared, has much better tools to prevent accidental mutation.
Don't get me started on Python; asyncio is terrible in its own unique ways (the structured concurrency in trio and similar seems much saner, but I have yet to use it).
[Sorry for continuing this argument, as you can tell I have quite strong opinions on it; feel free to ignore me if you are not interested]
You would hope if this was done properly you wouldn’t be using callbacks at all, because that’s kinda throwing away any benefits async/await provides, and reentrancy to the event loop should require explicit markings.
> if we value explicit markers for reentrancy, we should be instead explicitly marking reentrancy-unsafe regions with atomic blocks instead of relying on the implicit indirect protection of within-async regions.
In principle yeah, I kinda agree, but a lot of code isn't reentrancy-safe; I would argue that most code isn't reentrancy-safe, unless carefully designed to be. So a programming model that implicitly makes most code protected against reentrancy does provide value, and makes it harder for difficult-to-debug concurrency bugs to slip in.
I don't necessarily think it's the "best" approach; I much prefer people actually think about their code carefully, and be explicit with their intentions. But that requires quite a lot of experience, and understanding of the detailed nuances that come with parallelism, something many engineers simply don't have. So I think there's a lot of value in programming paradigms that provide additional protection against those types of errors, but without forcing the type of rigour that Rust does, due to the learning barrier it creates.
I suspect that async/await is here to stay for now, but I very much see it as part of a continuum of concurrency paradigms that we’ll eventually move past, once we find better ways of writing safe concurrent code. But I suspect we’ll only really discover those better ways once we fully explored what async/await offers, and completely understand the tradeoffs it forces.
If I want a short/simple program, it would be cool to put a stanza at the top of the file to auto-await all futures.
'print()' should be async because it does IO. In the real world most likely you'd see the output once you yield.
Normally print isn’t a debug message function, people just use it like that. (it normally works on non debug builds)
If your console app is writing output to any device, it must, for instance, handle errors gracefully.
That means, at least in Rust, write! rather than print!.
From the docs: "Use println! only for the primary output of your program. Use eprintln! instead to print error and progress messages."
There is no point in juggling around Result types if a failure means that you can not recover/continue execution. That is in fact exactly what panic! is intended for [1].
[1]: https://doc.rust-lang.org/book/ch09-03-to-panic-or-not-to-pa...
Now if you fail to write to stderr, yeah, that's a good reason for a console app to panic. The onus is on the user to provide something that is "good enough" in that case.
IMO the real problem is that print() etc defaults to stdout historically, but is used mostly for diagnostic information rather than actual output in practice, so it should really go to stderr instead. This would also take care of various issues with buffering etc.
There's value in the Hello, World and println-debugging style print, even if it should be eschewed in most general contexts.
Consider something as trivial as `cat foo >readonly_file` to see why.
You can and should recover from bog standard IO failures in production code, and in any case you'd better not be panicking in library code without making it really clear that it's justified in the docs.
If your app crashes in flames on predictable issues it's not a good sign that it handles the unpredictable ones very well.
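For what it's worth, "handle write failures gracefully instead of panicking" can be pretty small in Rust. A sketch, where treating a broken pipe as a quiet exit is just one reasonable policy, not something the language or these commenters prescribe:

    use std::io::{self, Write};
    use std::process::exit;

    fn main() {
        let stdout = io::stdout();
        let mut out = stdout.lock();
        for line in ["primary", "output"] {
            if let Err(e) = writeln!(out, "{line}") {
                match e.kind() {
                    // Downstream closed the pipe (e.g. `prog | head`): exit quietly.
                    io::ErrorKind::BrokenPipe => exit(0),
                    // Anything else (full disk, read-only target, ...) is a real error.
                    _ => {
                        eprintln!("write failed: {e}");
                        exit(1);
                    }
                }
            }
        }
    }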
I personally find myself using print debugging as a last resort when the debugger doesn't suffice.
What if I want to do synchronous IO?
> Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.
Good riddance, IMO — never been a fan of blocking IO. Dada does have threads though, so I wonder how that works out. (Forcing async/await makes a lot more sense in JavaScript because it's single-threaded.)
What you're asking for is "stop running my code until I've finished printing to the console". That's what the `.await` does. Synchronous IO on `print()` would mean _everything in the whole application that logs_ suddenly blocks the whole application from doing _anything_ while the console is being written to, not just the currently running code.
If you want synchronous stop-the-world IO (like Python, where async/await is bolted on), you shouldn't choose a language based around async/await concurrency.
.NET core will introduce something similar
JavaScript lets you do it much more ergonomically.
let f = spawn { print() } // fork
...
wait f // join. f is a linear type
You only pay the complexity cost if you need it. At least let people change the default. For example:
await {
// all the code here
// runs synchronously
async {
// except this part where
// async methods will return early
print("but not me!").await()
}
}
However, the remark I make to people advocating for statically typed Ruby holds for this language too: there are already languages like that (in this case, await by default); we can use them and let Dada do its own thing. (These kinds of questions are just unavoidable though; everyone will have these little pet things that they subjectively prefer or dislike.)
https://rust-lang.github.io/async-book/01_getting_started/04...
Amusingly, this is history repeating itself. These days we consider the X.Y syntax for object members quite natural, but historically if you look at the earliest examples, it was actually prefix. The first ALGOL-60 dialects to add records used functional notation, so you had to do Y(X). In ALGOL-68, they made it an operator instead (which allowed for proper namespacing), but it was still prefix: Y OF X; very straightforward and natural. But then people pretty quickly found out that (Y OF (X OF (...)) does not make for readable code in practice.
What I think they did wrong was require a period there - that is the part that makes it look like a property access. It would have been better as `print() await`, making it clear that it is just a postfix operator.
IO is inherently extremely complicated, but we always want people to be able to do their simplified form without thinking about it.
Either all side effects should be marked or none should. Retconning await annotations as a useful feature instead of a necessary evil is baffling.
Memory allocations, by comparison, are extremely quick, and generally very reliable. Your system’s memory subsystem isn’t a smorgasbord of different memory drivers and controllers. It’s one memory system, talking to one memory controller, via an API that’s been standardised for decades, and where every implementation of that API is basically tested to the extreme every time a computer turns on. That’s assuming your language even bothers asking the OS for memory on every allocation, which it probably doesn’t. Most language runtimes request large blocks of memory from the OS, then allocate out of those blocks on demand. So most “allocating functions” never result in a syscall at all.
The fact that most allocations are fulfilled via internal pools is immaterial; at some point the allocator needs to ask the OS for more memory. This parallels the way that most I/O doesn't actually perform syscalls, because of buffering.
Also allocations might end up performing arbitrary I/O indirectly if the OS needs to flush dirty pages to disk to free up memory.
I suppose that a wrapper like "aprint" (a convenience function labelled async, like with an "a" prefix) would be a bit better than having people continually try using print, not await it, and not get the expected output in stdout (or whatever stream it's sent to) while they are in the middle of trying to test something or otherwise get something working, because I'm of the opinion that common things should be easy. Maybe "people would generally expect a print function to just work and not return a promise or something" is an abstraction? "aprint" might actually be the wrong name; I'm not sure I've really thought about it right.
I write an in-memory KV cache. It's in memory, so no async needed. Now I create a trait and implement a second version with file backing. Now the children are crying because async needs to be retroactively added, and also why, makes no sense, etc.
Perhaps anything involving syscalls should be exposed and contractual. I doubt it, but maybe it’s important for some obscure ownership-of-resources reason. But then why the inconsistency between traditional and pooled syscalls? The only difference is whether the runtime sits in the kernel or in user space. The only one who should care is the runtime folks.
My take has been for years that this is throwing complexity over the fence and shaming users for not getting it. And even when they do get it, they Arc<Mutex> everything anyways in which case you are throwing the baby out with the bathwater (RAII, single ownership, static borrowing).
Because the kernel doesn't expose that contract, so they don't have that behaviour.
> The only difference is whether the runtime sits in the kernel or in user space.
In other words, what contracts you have control over and are allowed to provide.
> My take has been for years that this is throwing complexity over the fence and shaming users for not getting it.
I'm sure how we got here would seem baffling if you're going to just ignore the history of the C10K problem that led us to this point.
You can of course paper over any platform-specific quirks and provide a uniform interface if you like, at the cost of some runtime overhead, but eliminating as much of this kind of implicit runtime overhead as possible seems like one of Rust's goals. Other languages, like Go, have a different set of goals and so can provide that uniform interface.
It's probably also possible to have some of that uniform interface via a crate, if some were so inclined, but that doesn't mean it should be in the core which has a broader goal.
I am not unaware of pooled syscalls. I worked on the internals of an async Rust runtime, although that should not matter for critiquing language features.
The archeological dig into why things are the way they are can come up with a perfectly reasonable story, yet at the same time lead to a suboptimal state for a given goal - which is where the opinion space lies - the space where I’m expressing my own.
> but eliminating as much of this kind of implicit runtime overhead as possible seems like one of Rust's goals
Yes, certainly. And this is where the perplexity manifests from my pov. Async is a higher-level feature, with important contractual ecosystem-wide implications. My thesis is that async in Rust is not a good solution to the higher-level problems, because it interacts poorly with other core features of Rust, and because it modularizes poorly. Once you take the event loop(s) and lift it up into a runtime, the entire point (afaik - I don’t see any other?) is to abstract away tedious lower-level event and buffer maintenance. If you just want performance and total control, it’s already right there with the much simpler event loop primitives.
In short, I fail to see how arguments for async can stand on performance merits alone. Some people disagree about the ergonomics issues, which I am always happy to argue in good faith.
>Because the kernel doesn't expose that contract, so they don't have that behaviour
Which OS are we talking about? Linux doesn't really have mutices as primitives. You can build async mutexes on top of eventfd and soon even on top of futexes with io_uring.
Implementations of the standard library mutexes are here https://github.com/rust-lang/rust/tree/master/library/std/sr...
And of course it’s pthreads on Linux.
As an aside, it is interesting that rust uses pthread_mutex for its standard library mutex. GCC/libstdc++ regrets that decision as its std::mutex is now way larger than it needs to be but it is now permanently baked in the ABI. I guess rust still doesn't guarantee ABI stability so the decision could be reversed in the future.
“Zero complexity print to the screen”
Is, quite possibly, the dumbest argument people make in favour of one language over another.
For experienced people, a cursory glance at the definitions should be enough. For new programmers, ignoring that part “for now” is perfectly fine. So too is “most programming languages, even low-level ones, have a runtime that you need to provide an entry point to your program. In Java, that is public static void main. We will go over the individual aspects of this later.” This really is not that difficult, even for beginners.
Personally, I find more “cognitive load” in there not being an explicit entry point. I find learning things difficult when you’re just telling me extremely high level *isms.
If they're already making it gradually typed and not low-level, I don't understand why they don't throw away the C ABI-ness of it and make it more like Ruby with fibers/coroutines that don't need async/await.
I'd like parametric polymorphism and dynamic dispatch and more reflection as well if we're going to be making a non-low-level rust that doesn't have to be as fast as humanly possible.
(And honestly I'd probably like to keep it statically typed with those escape hatches given first-class citizen status instead of the bolted on hacks they often wind up being)
[Ed: also rather than go back to object oriented, I'd rather see really easy composition, delegation and dependency injection without boilerplate code and with strongly typed interfaces]
if __name__ == "__main__":
main()
I had the same issue with Swift. There’s 30 ways to write the exact same line of code, all created by various levels of syntax sugar. Very annoying to read, and even more annoying because engaging different levels of sugar can engage different rulesets.
print("Hello World") is a perfectly valid and runnable Python code.
And when you are working on a small part of a large code base, you usually don't care about __main__ either. So yes, it's complexity but it's complexity that you don't need to encounter right away.
Python is intuitive off the bat. public static void main(String[] args) is not.
$ cat Hello.java
void main() { System.out.println("Hello, world!"); }
$ java --enable-preview --source 21 Hello.java 2>/dev/null
Hello, world!
$
This is currently a preview feature in Java 21 and 22.

class Program {
static void Main() {
Console.WriteLine("...");
}
}
But these days, we can just do:

Console.WriteLine("...");
But if you're spawning multiple threads - in Python or any other language - you're already past any semblance of "simplicity", threads or no threads.
Erlang does by basically not having shared mutable data.
That's true regarding side-effects and mutable data, but I wasn't saying that. It's still a much more sane and actually concurrent and asynchronous library than Python's asyncio, which is not actually concurrent, single-threaded, and very difficult to work with. For example, there's no way to launch an asynchronous process and then later await it from synchronous code, whereas it's easy in F#.
In JavaScript calling the function would start the task, and awaiting the result would wait for it. This lets you do several things concurrently.
How would you do this in Dada:
const doThings = async () => {
const [one, two, three] = await Promise.all([
doThingOne(),
doThingTwo(),
doThingThree(),
]);
};
And if you wanted to return a thunk to delay starting the work, you would just do that yourself.

> Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.
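For comparison (not Dada, which only exists on paper): Rust futures are also lazy until awaited, so the same fan-out might look roughly like this, using the futures crate's join! macro; the do_thing_* functions are stand-ins mirroring the JavaScript example:

    use futures::join;

    async fn do_thing_one() -> u32 { 1 }
    async fn do_thing_two() -> u32 { 2 }
    async fn do_thing_three() -> u32 { 3 }

    async fn do_things() {
        // Calling the functions only builds the (lazy) futures;
        // join! drives all three concurrently and waits for every result.
        let (one, two, three) = join!(do_thing_one(), do_thing_two(), do_thing_three());
        println!("{one} {two} {three}");
    }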
If I'm declaring an async function, why do I need to await inside it?
like, if the return of an async function is a promise (called a thunk), why can't I do
async async_foo() { return other_async_foo(); } and it will just pass the promise?
Then you await on the final async promise. Makes sense?
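In Rust, whose futures behave much like the thunks described here, the way to "just pass the promise through" is to not mark the wrapper async at all. A sketch, with hypothetical names matching the question:

    use std::future::Future;

    async fn other_async_foo() -> u32 { 42 }

    // Not marked async: it just hands back the un-awaited future.
    fn async_foo() -> impl Future<Output = u32> {
        other_async_foo()
    }

    async fn caller() {
        // The single await happens here, on the "final" future.
        let value = async_foo().await;
        assert_eq!(value, 42);
    }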
I'm aware that there are a few languages that come close to this (crystal iirc), but in the end it's adoption and the ecosystem that keeps me from using them.
[1] https://doc.rust-lang.org/book/ch15-04-rc.html
[2] https://doc.rust-lang.org/book/ch15-05-interior-mutability.h...
You’re telling people to just ignore the paved road of Rust, which is bad advice.
The method documentation alone in reference counting is more pages than some entire programming languages. That’s beside the necessary knowledge for using it.
I do think that a superset of Rust that provided first-class native syntax for ARC would be much more popular.
I tell everybody to .clone() and (a)rc away and optimize later. But I often struggle to do that myself ;)
Mutexes and reference counting work fine, and are sometimes dramatically simpler than getting absolutely-minimal locks like people seem to always want to do with Rust.
(To be clear, using RC for everything is fine for prototype-level or purely exploratory code, but if you care about performance you'll absolutely want to have good support for non-refcounted objects, as in Rust.)
Edit: Googled it. Found an answer:
> The only distinction between Arc and Rc is that the former is very slightly more expensive, but the latter is not thread-safe.
[1] https://doc.rust-lang.org/rust-by-example/std/rc.html
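A small Rust illustration of that trade-off (sketch only):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Arc uses atomic reference counts, so clones can cross threads.
        let shared = Arc::new(vec![1, 2, 3]);
        let worker = {
            let shared = Arc::clone(&shared);
            thread::spawn(move || shared.iter().sum::<i32>())
        };
        assert_eq!(worker.join().unwrap(), 6);
        // Swapping Arc for Rc here would not compile: Rc's counts are
        // non-atomic (slightly cheaper), so Rc is !Send and can't be
        // moved into the spawned thread.
    }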
F# being on top of the CLR and .NET is a benefit. It is very easy to install .NET, and it comes with a huge amount of functionality.
If you're asking if the language F# could be ported to another VM, then I'd say yes, but I don't see the point unless that VM offered similar and additional functionality.
You can use F# as if C# didn't exist, if that's what you mean, and by treating .NET and CLR as an implementation detail, which they effectively are.
Other than that, the question is indeed strange and I agree with your statements.
But these days .NET is a great server-side option. One of the fastest around, with a bit of tuning.
That's what ReasonML is? Not quite "exploding" in popularity, but perhaps more popular than Ocaml itself.
Still, the language is great. Plus, it has Java interop, JVM performance, and Jetbrains tooling.
You could still have your IDE showing you type hints as documentation, but have inferred types to be more fine grained than humans have patience for. Track units, container emptiness, numeric ranges, side effects and idempotency, tainted values for security, maybe even estimated complexity.
Then you can tap into this type system to reject bad programs ("can't get max element of potentially empty array") and add optimizations (can use brute force algorithm because n is known to be small).
Such a language could cover more of the script-systems spectrum.
Despite type systems being powerful enough to figure out what types should be via unification, I don't think asking programmers to write the types of module declarations is too much. This is one area where forcing work on the programmer is really useful to ensure that they are tracking boundary interface changes correctly.
Or is this about libraries and API compatibility?
* I have seen examples of spooky-action-at-a-distance where usage of a function changes its inferred type, but that goes away if functions are allowed to have union types, which is complicated but not impossible. See: https://github.com/microsoft/TypeScript/issues/15114
If I download a random project and delete the interface files, will that be enough to see issues, or is it something that happens when writing new code?
The problem is when it doesn't complain but instead infers some different type that happens to match.
There are a lot of adjectives you can use to describe Scala - mostly good ones! - but "small" just isn't one of them.
0: https://docs.scala-lang.org/tour/tour-of-scala.html#what-is-...
The whole point of Rust's type system is to try to ensure safe memory usage.
Opinions are opinions, but if I’m letting my runtime handle memory for me, I’d want a lighter weight, more expressive type system.
But I don’t think it prevents any more logic bugs than any other type system that requires all branches of match and switch statements to be implemented. (Like elm for example)
I do also believe this might be a sweet spot for a language, but the details might be hard to reconcile.
Edit: I would also prefer shared nothing parallelism by default so the GC can stay purely single threaded.
It isn't though. The whole trait system is unnecessary for this goal, yet it exists. ADTs are unnecessary to this goal, yet they exist. And many of us like those aspects of the type system even more than those that exist to ensure safe memory usage.
I think traits muddy that goal, personally, but their usefulness outweighs the cost (Box<dyn ATrait>)
I should’ve probably said “the whole point of Rust’s type system, other than providing types and generics to the language”
But I thought that went without saying
It ... just ... isn't, though.
I mean, I get what you're saying, it's certainly foundational, Rust would look incredibly different if it weren't for that goal. But it just isn't the case that it is "the first and foremost goal of every language choice in rust".
I followed the language discussions in the pre-1.0 days, and tons of them were about making it easier and more ergonomic to create correct-if-it-compiles code, very often in ways that had zero overlap with safe memory usage.
Traits don't "muddy that goal", they are an important feature of the language in and of themselves. Same thing with the way enums work (as algebraic data types), along with using Option and Result for error handling rather than exceptions. Same thing with RAII for tying the lifecycle of other resources to the lifecycle of values.
The memory safety features interact with all these other features, for sure, and that must be taken into account. But there are many features in the language that exist because they were believed to be useful on their own terms, not in subservience to safe memory usage.
And it's not just about "providing types and generics to the language", it's a whole suite of functionality targeted at static correctness and ergonomics. The ownership/lifetime/borrowing system is only one (important!) capability within that suite.
Lifetime elision works pretty well, so you don't often need to specify lifetimes.
It usually pops up when you use generics / traits (what concrete type does it match to?)
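A quick illustration of where elision stops applying (hypothetical helper functions):

    // Elision applies: one input reference, so the output is assumed
    // to borrow from it and no lifetime needs to be written.
    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    // Two input references: the compiler can't tell which one the
    // result borrows from, so the lifetime must be spelled out.
    fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
        if a.len() >= b.len() { a } else { b }
    }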
Same for Box, but in fact Rust went the opposite way and turfed the Box ~ sigil.
Which I actually feel was a mistake, but I'm no language designer.
Aaaah, I'm realizing in typing this that the `@foo` syntax was actually implemented via reference counting? I think my intuition at the time was that the intention was for those to eventually be backed by a mark-and-sweep GC, which I did think was a poor fit for the rest of the language. But as just a syntax for reference counting, I honestly think it might have been an ok fit.
Or maybe not, I'm ambivalent. But the syntax thing in my comment is more of a red herring for what I think is more of a cultural "issue" (to the small extent it is an issue at all), which is that most Rust projects and programmers seem to try to write in a style that defaults to only choose reference counting when they must, rather than using a style of optimizing them out if they show up in a hotspot during profiling.
Regardless of the specifics here, the same problems apply. Namely that it privileges specific implementations, and makes allocation part of the language.
It wouldn't be a good fit for projects like the ones at Oxide :) I'm very glad Rust itself exists, with good support for use cases like those!
You might have to lose a few parens though!
For me it seems like the perfect match.
https://pcwalton.github.io/_posts/2013-06-02-removing-garbag...
That has changed through the years: https://graydon2.dreamwidth.org/307291.html
Well, since you can't really use a language without high adoption, even if something comes up with all the features you want, you still won't be able to use it for decades or longer.
There is Rune, but like you mentioned the issue is adoption, etc.
Whenever I've had to write kotlin for Android in the past I did quite enjoy it. It seems like the entire ecosystem is very enterprise-y when it comes to web though. Forced adherence to object orientedness and patterns like 100 files, 5 folders deep with 10 lines of code each keep cropping up in most kotlin projects I've seen.
Most people don't. That's not the fun part of language design.
Imperative code with functional constructs seems like the most workable approach to me, which rust, go, and other languages like kotlin, crystal etc. all offer.
Also, syntax does actually matter, because it's the first thing people see, and many people are immediately turned off by unfamiliarity. Rust's choice to largely "look like" c++/java/go was a good one, for this reason.
But I like ocaml both in theory and practice (also in part due to having my eyes opened to SML about 20 years ago).
Still with OCaml finally supporting multicore and still getting active interest, I often ponder going back and starting a project in it someday. I really like what I see with MirageOS.
These days I just work in Rust and it's Ok.
Gradual typing is interesting, but I wonder if it's necessary. Static typing doesn't have to feel like a burden, and gradual typing could make it hard to reason about performance. I think more type inference would be better than gradual typing (like OCaml/ML).
The fundamental concurrency system is green threads (similar to Go), which makes for a fantastic programming model where you spend your time writing linear blocking code, while actually having full parallelism. This is achieved both with the VM and the abstractions built on top like GenServers.
The Task module is a convenience that allows you to do "await" type work when that makes sense - because (as you describe) sometimes it does.
Open question: Are there any languages that can be used in a (decent [1]) REPL, that are strongly typed, but do not have Hindley–Milner-based type inference?
We have multiple concrete proofs that you can have a REPL with Hindley-Milner inference, but I'm curious if this is perhaps a concession to the difficulty of a strongly-typed REPL without a deeply inferable type system. But it's just an idle musing I'm throwing out to see the response to.
[1]: That is, for example, multiple people have put a Go REPL together, but anyone who has used a "real" REPL from the likes of Lisp, Haskell, Erlang, O'Caml, Python, etc., will not find it a "decent" REPL, as Go just can't have one for various reasons.
That can make a REPL sort of semantically weird. Do you allow variable declarations? If so, are they local or global? Do you allow class declarations in them? If so, can they access previous declarations?
All of that's easier in a dynamically typed language where the top level of a program is more or less just regular imperative code.
It's not insurmountable though, because you can be a little hand-wavey about semantics in a REPL if needed.
Code in general is hard for me to mentally read. I know it sounds nitpicky, but to me all keywords should be obviously pronounceable, so something like "func" instead of "fn" would be mandatory. Also, using the permission keywords where I'd expect the type to be also seems a bit strange, as I'd imagine that keyword to prefix the variable -- that's just how I think though.
It does seem like less decorator magic and symbol-based syntax would make it easier for beginners to grasp.
I may sound like a curmudgeon, but I'd prefer only one type of language innovation at a time.
Cutting my teeth on Schemes and MLs and now working in Python, I have the complete opposite experience. It's jarring to have to specify return. What else would I want to do at the end of an expression? It seems tautological. The real reason it's there in Python is early return, which is even more dangerous and jarring.
When I looked at rust code before, it all seemed a bit weird. I couldn’t immediately understand it, but I’ve since come to realize this was because the dozen or so languages I can read well don’t really resemble rust, so my pattern matching was a bit off.
The more I learn about the syntax and core concepts, the more I’m learning that my brain absolutely loves it. Once I started to understand matches, lifetime syntax and the core borrowing mechanics, things clicked and I’m more excited about writing code than I’ve been since I taught myself GW-BASIC 25 years ago.
Just sharing this anecdote because I find it interesting how differently people experience languages. I also have an ongoing friendly debate with a friend who absolutely hates Python, while I rather enjoy it. I’ve tried to understand why he hates it, and he’s tried to understand why I like it. And it all seems to come down to hard-to-define things that just rub us in different ways.
I hope the benefits of rust find their way into more types of languages in the future.
For instance, I'm fine to write C++, Javascript or Python (with types at least). Ruby or Rust for some reason do rub me the wrong way, no matter how much I try to tough it out.
I've always really struggled with the various Lisp variants.
Have you tried Ada?
> so something like "func" instead of "fn" would be mandatory.
What about no keywords, like:
x => ...func body
I have tried Pascal in that sphere, which was on the too verbose side.
Arrow notations like in JS/Typescript are fine to parse for me. Some clear symbols are actually easier to read than an unpronounceable alphanumeric.
(* Pascal *)
if a > b then
begin
blah;
blah;
end
else
begin
blah;
blah;
end;
-- Ada
if a > b then
blah;
blah;
else
blah;
blah;
end if;
// C
if (a > b) {
blah();
blah();
} else {
blah();
blah();
}
Pascal is clearly very verbose here, but the other two are pretty similar. That said, I think that punctuation to delimit blocks makes more sense because it makes program structure clearly distinct. Although by the same token I think I'd prefer conditionals and loops to also be symbolic operators, so that there are no keywords inside executable code, only identifiers. Basically something like "?" instead of "if", "@" instead of "while", etc.
OCaml is nice in that "begin" and "end" are aliases for "(" and ")", so you can use whichever one makes the most sense for readability.
Anyway, the point is that the way to go is either 1) make structured programming constructs implicitly compound, so that you don't need braces for multi-statement bodies, or 2) make those braces as compact as possible so that being explicit is not so painful.
I hear you. Internally, I always pronounced "var" as rhymes with "care", but then a colleague pronounced it "var" as rhymes with "car". I think the same guy pronounced "char" like char-broiled, whereas I had thought of it like "care". And he would say "jay-SON" for json, which I pronounced like Jason.
How would you feel about a notation that is not meant to be pronounced at all?
+Employee {
}
where + indicates a class definition.
:Rename() where : indicates a class method.
~DoStuff() where ~ indicates a static function
Keywords only? How about function names like strspn or sbrk? And how do you feel about assembly language, using mnemonics like fsqrt or pcmpeqd?
BTW, thinking about it, I notice, that I need all these lexemes to be pronounceable too, and I have my ways to pronounce sbrk or pcmpeqd. Probably if I do it aloud no one will understand me, but it doesn't matter because these pronunciations are for internal use only.
I do consider the lightning start-up speed of a program to be one of the killer features of Rust. Rust with garbage collection throws away one of its biggest advantages compared to every other language around.
The slow startup you associate with GC language implementations like ones for Java and JavaScript mostly comes from JIT warmup.
As easy as JavaScript to write, as fast as Rust when the extra effort to write it justifies it.
But perhaps it's a viable "training wheels" approach for getting used to borrow-checker friendly patterns? And I guess a scripting interpreter option that is fully rust-aware in terms of lifetimes could be truly golden for certain use cases, even if it turns out to be completely hostile to users not fully in tune with the underlying Rust. Sometimes "no recompile" is very important.
I wonder if the genesis story of the project might be hidden in "Dada has a required runtime": perhaps it started with the what-if of "how nice could we make Rust if we abandoned our strict "no runtime!" stance and went for making it runtime-heavy like e.g. Scala"? Then the runtime pulls in more and more responsibility until it's easier to consume a raw AST and from there it's not all that far to making types optional.
> if latency matters you end up having to care about (de)allocation patterns at the app level.
Yes, and you want tools that allow you to precisely describe your needs, which might be more difficult if a lumbering brute is standing between you and your data.
Most languages don't use reference counting because most applications are either one-shot console apps, GUI apps, or web apps - the latter two operate on "bursts" of input. If your app operates in "bursts" then a tracing GC is superior since you can delay collection until the app is waiting for more input. Real-time apps don't have a moment where they "wait" therefore they should prefer reference counting for its predictable performance.
You have to use the right GC algorithm for the right job, but unfortunately programming language runtimes don't usually offer a choice.
So don't do that then? Put most things on the stack. It's far faster than any allocation.
Rust’s “difficulty” stems from its single ownership model, and this model is “different”, not “easier”.
JS has plenty of bad parts you shouldn't use. Classes are the main one.
This isn't meant to be an attack on javascript as a worthwhile tool to learn, by the way, just a testament to the fact that it's not an easy language to master in the slightest.
return
{ ... }
And JS helpfully inserted a semicolon after the return. This is a feature you need to know about, and you have to go out of your way not to get rekt by it.
Here is the original quote:
I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices his art in his own way, if be knows the joy that rises like arrows to the astral layers, or that other joy that goes down into the mines of corpse-flowers and fertile spasms.
changed to :
I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices their art their own way.
dada-lang about: https://dada-lang.org/docs/about/

Tzara, Dada Manifesto 1918: https://writing.upenn.edu/library/Tzara_Dada-Manifesto_1918....
[...]
# This function is declared as `async` because it
# awaits the result of print.
async fn print_point(p) {
# [...]
print("The point is: {p}").await
}
[...]
From the first page of the tutorial:

> Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.
So, what it boils down to is that async/await are like lazily computed values (they work a bit like the lazy/force keywords in Ocaml for instance, though async seems to be reserved for function declarations). If that is the case, that method "print_point" is forcing the call to print to get that thunk evaluated. Yet, the method itself is marked async, which means that it would be lazily evaluated? Would it be the same to define it as:
fn print_point(p) {
print("The point is: {p}")
}
If not, what is the meaning of the above? Or with various combinations of async/await in the signature & body? Are they ill-typed? I wish they'd provide a more thorough explanation of what await/async means here.
Or maybe it is a dadaist[0] comment?
What this means, concretely, in Rust, is `.await` will return the thunk to the caller, and the caller should resume the async function when the result is ready. Of course the caller can await again and push the responsibility further back.
The most important thing here, is that `.await` yields the control of execution. Why does this matter? Because IO can block. If control wasn't given up, IO will block the whole program; if it is, then something else will have a chance to run while you wait.
In other words, print produces a thunk, and print_point also produces a thunk, and when await is used on the latter, it is executed asynchronously, which will execute the print also asynchronously. So we end up with 3 different execution contexts: the main one, and one for each "await"?
What is the point of this, as opposed to executing the thunk asynchronously right away? Also, how does one get the result?
From the point of view of print_point, await executes the thunk synchronously: print_point's execution stops and waits for print to finish its work. But a caller of print_point might want to run print_point asynchronously, so print_point is an async fn, and the caller can do something more creative than just await it.
So, it seems I had understood the principle the way you explain it, but the code comment on print_point (as I indicated in the top of this thread) isn't saying that.
That would be Swift?
Interesting experiment. But it does seem like there are increasing numbers of languages trying to crowd into the same spaces.
I don't think it will be, it sounds like a concept of similar complexity and it won't make it an "easy language".
People are scared of Typescript, so a typed language with an extra ownership concept will sound exactly like rust in terms of difficulty.
Not that I get the reputation of Rust being hard, even as a complete novice I was able to fight a bit with the compiler and get things working.
The gradually typed approach is nice but it just sounds like smarter type inference would get you 99% there while keeping the performance (instead of using runtime checks).
Not having unsafe code is both interesting and limiting. I keep all my code safe for my own mental sanity but sometimes having bindings to some big library in c/c++ is convenient (eg Qt or OpenCV).
It's dynamically typed and uses lifetimes instead of a garbage collector.
JavaScript (new) is +++2, and ++3 (to me). Java is +++1 & --2, -3.
Personally I like OO ("has a") but think Class-ification,("is a") is a big mistake. Take a truck and a car. Start replacing the pieces of the car with pieces from the truck. When is a car not a car? Arbitrary. When does the car have a tail gate, a flat bed?
That is not a joke. Classes and Types are a way to think (Sapir Whorf) that makes you do strange things.
The interesting thing about Dada is the "borrow", "share" etc and seems very good. But then instead of wrapping it in a class can't we just use an Object?
If you are making a new programming language, please do us a favor and put your Hello World syntax example right on the landing page.
Rust is great but being an early adopter has made its usability imperfect in places. Combining substructural typing with gradual typing and OOP is interesting here. Others in this thread have also mentioned wanting a higher-level Rust, like Go. I'd like to see a purely functional Rust. Haskell has experimental support for linear typing[1], but I suspect a language built with it from the ground up would be very different.
[0]: https://verdagon.dev/blog/higher-raii-7drl
[1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/line...
I'm coming from Python, and for situations when people grasp for C/C++ kinds of performance and control, I think people are aware of the need for high-performance memory-safe languages that are easier to use than Rust, but with many of Rust's benefits being at least possible. So I am quite excited by the thinking from Dada and the people who are behind Rust, and I'm also intrigued by SerenityOS's Jakt language project. I hope the insecure "C code problem" has a smooth migration path that lets C/C++ devs, TypeScript devs, and others make progress quickly in a powerful way. What other sorts of alternative languages are there, among Dada's aspirations? Jakt? Vale (I understand a lead dev is poorly, so it's slowed a bit lately)? D? Go? Obviously AI will have a big impact. What language is going to have a big impact in this space?
Otherwise, the idea of creating something close to rust but without the complexity sounds interesting. I just hope they don't stick to that name.
I wonder what that does...
> OK, from here on out I'm going to pretend that Dada really exists in its full glory.
This is a brilliant trick I only recently discovered in another context: write the docs first, to validate the user experience of a novel system.
All too often, the engineering has started at "customers want to be able to do $x", and that's the last time the customer was part of the consideration. The solutions are great, but often miss out on what it'd be like to actually use it, as a customer. Lots of foot guns, and expectations of knowledge that a customer couldn't possibly have unless they had as much understanding of what happens under the hood as the engineers did, etc.
Feel free to experiment on the syntax, but the concept is amazing, especially if you're planning on being dynamic-ish.
Are classes cool again?
Perhaps his most famous piece is a photo of Hitler captioned "millions stand behind me," showing a donor passing him stacks of cash.
https://graydon2.dreamwidth.org/307291.html
https://www.reddit.com/r/rust/comments/7qels2/i_wonder_why_g...
> I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices their art their own way.
> Tristan Tzara, "Dada Manifesto 1918”
I've probably written 100s of tiny little utility programs that are a couple of lines at most, and wouldn't need types for any of those, it would just add extra verbosity for no gain.
In garbage-collected languages, please give me gradual / optional annotations that permit deterministic fast freeing of temps, in code that opts in.
Basically to relieve GC pressure, at some modest cost of programmer productivity.
This unfortunately makes no sense for small bump-allocated objects in languages with relocating GC, say typical java objects. But it would make a lot of sense even in the JVM for safe eager deterministic release of my 50mb giant buffers.
Another gradual lifetime example is https://cuda.juliagpu.org/stable/usage/memory/ -- GPU allocations are managed and garbage collected, but you can optionally `unsafe_free!` the most important ones, in order to reduce GC pressure (at significant safety cost, though!).
> Updated to use modern pronouns.
I would say that their updated quote is a more accurate translation of the original than the English translation they initially used.
As long as the meaning of the quote isn't changed I couldn't care less and it seems very important to some people.
What I personally dislike though is the whole "Ask me my pronouns" thing... like "No, I don't care about your gender or sex, as long as I am not interested in a romantic relationship with you - just tell me how to call you and I'll do it, but more effort? No!"
To elaborate a bit more: I find the topic exhausting not because I hate freedom of choosing your own gender or anything like that, but because I personally do not care about your gender at all.
I don't care about your religion, your skin color, your culture, your sex, your gender... I care about individual people but I don't reduce them to a certain aspect of their existence.
Now I find the whole "Ask me my pronouns" exhausting and also rude because it puts pressure on me to ask you about a topic I am not interested in. Like: I get it, there is social pressure, I understand that you're not happy with certain social "norms" and developments. I totally get that and I guess we are on the same side for many of them, but I still do not care about your gender until I care about your gender. (And also, I don't live in your country probably, so your local politics may be of interest, but I still don't like being forced to talk about them before I can ask a genuine question on e.g. a technology topic ;))
Just write his/her/theirs... and I will respect your choice. I will not think less of you, nor will I put you on a pedestal for something I do not care about.
The wikipedia article is quite detailed and will probably supply more information that anyone particularly wanted. https://en.wikipedia.org/wiki/Singular_they
Non-native speaker too; I find it easier to adjust in English compared to my native language (French), probably because the language is less ingrained in me. I embraced the English neutral plural - it's even convenient - but I found myself a bit more annoyed with the so-called French "écriture inclusive", such as "les étudiant.e.s sont fatigué.e.s" ("the students are tired"). Not really pretty IMHO. We could find something better...
It's been done before. See the royal plural.
And yet here I am, N levels down in this thread, griping about it. Oops.
In my native language it is quite old-school. Really polite form.
I predict this project will have its priorities backwards. There's a group of people who want to govern a programming language project, and inject their ideology into that structure, and maybe there's another group of avid language designers in there too. I think there are more of the first.
Compiler error if the variable name is sexist?
I was just talking about the project community and governance. It would be hard to imagine injecting ideology into the language itself.
Oh wait, nevermind...
https://doc.rust-lang.org/beta/nightly-rustc/tidy/style/cons...
What makes you so reliant on significant white space that any language without it is an automatic dismissal?