The io interface looks like OO but violates the Liskov substitution principle. For me, this doesn't solve the function coloring problem, it hides it. A function that takes an io parameter can't be reasoned about locally, because of unexpected interactions through that parameter. This is particularly nasty when io objects are shared across library boundaries: if I share an io object with a library, I now need to understand how that library manages io internally. Code that worked in one context may surprisingly not work in another context. As a library author, how do I handle an io object that doesn't behave as I expect?
Trying to solve this problem at the language level fundamentally feels like a mistake to me because you can't anticipate in advance all of the potential use cases for something as broad as io. That's not to say that this direction shouldn't be explored, but if it were my project, I would separate this into another package that I would not call standard.
Hopefully I'm wrong and it's wildly successful. Time will tell I guess.
> As a library author, how do I handle an io object that doesn't behave as I expect?
You ship with tests against the four or five default patterns in the stdlib, and if anyone wants to do anything substantially crazier, to the point that it doesn't work, that's on them. They can submit a PR and you can curbstomp it if you want.
> function coloring
I recommend reading the function coloring article. There are five criteria that make up the function coloring problem; it's not just that there are "more than one class of function calling conventions".
Isn't this just as true of any function using io in any other language?
> As a library author, how do I handle an io object that doesn't behave as I expect?
But isn't that the point of having an interface? To specify how the io object can and can't behave.
xBase, Clipper, Perl, Tcl upvars
For something like Zig, it would make sense to go one step further and require them to be declared to be passed, i.e. no "tunneling" through interleaving non-Io functions. But it could still automatically match them e.g. by types, so that any argument of type Io, if marked with some keyword to indicate explicit propagation, would be automatically passed to a call that requires one.
For instance, it is where Rust goes to die, because it subverts the stack-based paradigm behind ownership. I used to find it fun to write little applications like web servers in aio Python, particularly if message queues and websockets were involved, but for ordinary work you're better off using gunicorn. The trouble is that conventional async I/O solutions are all single threaded, and in an age where it's common to have a 16-core machine on your desktop, that makes no sense. It would be like starting a chess game by dumping out all your pieces except your King.
Unfashionable languages like Java and .NET that have quality multithreaded runtimes are the way to go because they provide a single paradigm to manage both concurrency and parallelism.
First, that would be Java and Go, not Java and .NET, as .NET offers a separate construct (async/await) for high-throughput concurrency.
Second, while "unfashionable" in some sense, I guess, it's no wonder that Java is many times more popular than any "fashionable" language. Also, if "fashionable" means "much discussed on HN", then that has historically been a terrible predictor of language success. There's almost an inverse correlation between how much a language is discussed on HN and its long-term success, and that's not surprising, as it's the less commonplace things that are more interesting to talk about. HN is more Vogue magazine than the New York Times.
Yes, sadly Java and .NET are unfashionable in circles like HN and recent SaaS startups. I keep seeing products that only offer Node.js-based SDKs, and when they do offer Java/.NET SDKs, they are generally outdated versus the Node.js one.
At the cost of not being able to provide the same throughput, latency, or memory usage as lower-level languages that don't enforce the same performance-pessimizing abstractions on everything. Engineering is about tradeoffs, but pretending that Java or .NET have solved this is naive.
Only memory usage is true with regards to Java in this context (.NET actually doesn't offer a shared thread abstraction; it's Java and Go that do), and even that is often misunderstood. Low-level languages are optimised for minimal memory usage, which is very important on RAM-constrained devices, but it could be wasting CPU on most machines: https://youtu.be/mLNFVNXbw7I
This optimisation for memory footprint also makes it harder for low-level languages to implement user-mode threading as efficiently as high-level languages.
Another matter is that there are two different use cases for asynchronous constructs that may tempt implementors to address them with a single implementation. One is the generator use case. What makes it special is that there are exactly two communicating parties, and both of their state may fit in the CPU cache. The other use case is general concurrency, primarily for IO. In that situation, a scheduler juggles a large number of user-mode threads, and because of that, there is likely a cache miss on every context switch, no matter how efficient it is.

However, in the second case, almost all of the performance is due to Little's law rather than context switch time (see my explanation here: https://inside.java/2020/08/07/loom-performance/). That means that a "stackful" implementation of user-mode threads can have no significant performance penalty for the second use case (which, BTW, I think has much more value than the first), even though a more performant implementation is possible for the first use case.

In Java we decided to tackle the second use case with virtual threads, and so far we've not offered something for the first (for which the demand is significantly lower). What happens in languages that choose to tackle both use cases with the same construct is that in the second and more important use case they gain no more than negligible performance (at best), but they're paying for that with a substantial degradation in user experience.
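The Little's law point can be sketched with toy numbers (all assumed for illustration, not measurements): throughput is bounded by in-flight concurrency divided by latency, so a slightly faster context switch barely moves the needle compared to raising the concurrency ceiling.

```python
# Little's law: concurrency (L) = throughput (lambda) * latency (W),
# so lambda = L / W. Toy numbers, purely illustrative: a server whose
# requests spend 50 ms waiting on downstream IO.

latency_s = 0.050  # per-request latency, dominated by IO waits

# Throughput is capped by how many requests can be in flight at once:
for concurrency in (100, 10_000):
    throughput = concurrency / latency_s
    print(f"{concurrency} in flight -> {throughput:,.0f} req/s")

# Shaving a 1-microsecond context switch off 50 ms of latency barely
# changes throughput; raising concurrency 100x raised it 100x above.
faster = 10_000 / (latency_s - 1e-6)
print(f"with 1 us faster switches: {faster:,.0f} req/s")
```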
For example, the best frameworks on TechEmpower are all Rust, C and C++, with the best Java coming in at 25% slower on that microbenchmark. My point stands: it is generally true that well-written Rust/C/C++ outperforms well-written Java and .NET, and not just on memory usage. The "engineering effort per unit of performance" may skew toward Java, but that's different from absolute performance. With Rust, it's also less clear to me that it's even true.
[1] https://www.techempower.com/benchmarks/#section=data-r23
Winning benchmark games doesn't matter if the customer doesn't get what they need, even if it runs at blazing speed.
And remember, we’re talking about a very niche and specific I/O microbenchmark. Start looking at things like SIMD (currently - I know Java is working on it) or in general more compute-bound work and the gap will widen. Java still doesn’t have the tools to write really high performance code.
Too many people go hard on "must be 100% pure"; meanwhile Python is taking over the AI world via native library bindings.
Modern Java GCs typically offer a boost over more manual memory management. And on latency, even if virtual threads were very inefficient and you added a GC pause with Java's new GCs, you'd still be well below 1ms, i.e. not a dominant factor in a networked program.
(Yes, there's still one cause for potential lower throughput in Java, which is the lack of inlined objects in arrays, but that will be addressed soon, and isn't a big factor in most server applications anyway or related to IO)
BTW, writing a program in C++ has always been more or less as easy as writing it in Java/C# etc.; the big cost of C++ is in evolution and refactoring over many years, because in low-level languages local changes to code have a much more global impact, and that has nothing to do with the design of the language but is an essential property of tracking memory management at the code level (unless you use smart pointers, i.e. a refcounting GC for everything, but then things will be really slow, as refcounting does sacrifice performance in its goal of minimising footprint).
Ironically, Java has okay performance for pure computation. Where it shows poorly is I/O intensive applications. Schedule quality, which a GC actively interferes with, has a much bigger impact on performance for I/O intensive applications than operation latency (which can be cheaply hidden).
Who said anything about a 1ms pause? I said that even if virtual thread schedulers had terrible latencies (which they don't) and you added GC pauses, you'd still be well below 1ms, which is not an eternity in the context of network IO, which is what we're talking about here.
Modern GCs can be pauseless, but either way you’re spending CPU on GC and not on servicing requests/customers.
As for c++, std::unique_ptr has no ref counting at all.
shared_ptr does, but that’s why you avoid it at all costs if you need to move things around. you only pay the cost when copying the shared_ptr itself, but you almost never need a shared_ptr and even when you need it, you can always avoid copying in the hot path
Since memory is finite and all computation uses some, every program spends CPU on memory management regardless of technique. Tracing GCs often spend less CPU on memory management than low-level languages.
> std::unique_ptr has no ref counting at all.
It still needs to do work to free the memory. Tracing GCs don't. The whole point of tracing GCs is that they spend work on keeping objects alive, not on freeing memory. As the size of the working set is pretty much constant for a given program and the frequency of GC is the ratio of allocation rate (also constant) to heap size, you can arbitrarily reduce the amount of CPU spent on memory management by increasing the heap.
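That heap-size lever can be sketched numerically (all numbers below are invented for illustration): GC frequency is roughly allocation rate over free heap, so growing the heap proportionally shrinks the CPU fraction spent tracing.

```python
# GC frequency ~= allocation_rate / (heap_size - live_set). For a fixed
# allocation rate and live set, doubling the free heap roughly halves
# how often you collect, and hence the CPU spent tracing.
# All numbers are illustrative assumptions, not measurements.

alloc_rate_mb_s = 1_000   # MB allocated per second (constant for the program)
live_set_mb = 500         # working set kept alive (roughly constant)
trace_cost_s = 0.010      # CPU-seconds to trace the live set once

for heap_mb in (1_000, 2_000, 4_000):
    gcs_per_s = alloc_rate_mb_s / (heap_mb - live_set_mb)
    cpu_fraction = gcs_per_s * trace_cost_s
    print(f"{heap_mb} MB heap: {gcs_per_s:.2f} GCs/s, "
          f"{cpu_fraction:.1%} of CPU on GC")
```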
Promises work great in javascript, either in the browser or in node/bun. They're easy to use, and easy to reason about (once you understand them). And the language has plenty of features for using them in lots of ways - for example, Promise.all(), "for await" loops, async generators and so on. I love this stuff. Its fast, simple to use and easy to reason about (once you understand it).
Personally I've always thought the "function coloring problem" was overstated. I'm happy to have some codepaths which are async and some which aren't. Mixing sync and async code willy nilly is a code smell.
Personally I'd be happy to see more explicit effects (function colors) in my languages. For example, I'd like to be able to mark which functions can't panic. Or effects for non-divergence, or capability safety, and so on.
Python added them in 3.7: https://docs.python.org/3/library/contextvars.html
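A minimal sketch of those in action (asyncio runs each task in its own copy of the context, so a value set in one task doesn't leak into another):

```python
import asyncio
import contextvars

# A task-local variable: each asyncio task sees its own value, like a
# thread-local but scoped to the task's context.
request_id = contextvars.ContextVar("request_id", default="-")

async def handler(rid: str) -> str:
    request_id.set(rid)
    await asyncio.sleep(0)   # yield so the two tasks interleave
    return request_id.get()  # still sees its own value

async def main() -> list[str]:
    return list(await asyncio.gather(handler("a"), handler("b")))

print(asyncio.run(main()))  # -> ['a', 'b']
```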
Performant thread-local variables require ahead-of-time mapping to a 1-or-2-level integer sequence with a register to quickly locate the base array, and some kind of trap to handle the "not allocated" case. Task-local variables are worse than thread-locals since they are swapped out much more frequently.
This requires special compiler support, not being a mere library.
In .NET they do virtual dispatch via a very basic map-like interface that has a bunch of micro-optimized implementations that are swapped in and out as needed if new items are added. For N up to 4 variables, they use a dedicated implementation that stores them as fields and does simple branching to access the right one, for each N. Beyond that it becomes an array, and at some point, a proper Dictionary. I don't know the exact perf characteristics, but FWIW I don't recall that ever being a source of an actual, non-hypothetical perf problem. Usually you'll have one local that is an object with a bunch of fields, so you only need one lookup to fetch that, and from there it's as fast as field access.
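That size-specialized strategy can be sketched in Python (names and cutoffs invented for this sketch; the real .NET implementations differ): store a few entries as plain fields with simple branching, and upgrade the representation to a real dictionary when the map outgrows them.

```python
# Illustrative sketch of a size-specialized map: up to 2 entries live in
# fields with linear branching; beyond that, a dict-backed version is
# swapped in. Writes return a new map, mirroring the swap-in/swap-out.

class SmallMap:
    """Up to 2 entries stored as fields; lookup is simple branching."""
    def __init__(self, k1=None, v1=None, k2=None, v2=None):
        self._k1, self._v1, self._k2, self._v2 = k1, v1, k2, v2

    def get(self, key):
        if key == self._k1:
            return self._v1
        if key == self._k2:
            return self._v2
        raise KeyError(key)

    def set(self, key, value):
        if self._k1 is None or self._k1 == key:
            return SmallMap(key, value, self._k2, self._v2)
        if self._k2 is None or self._k2 == key:
            return SmallMap(self._k1, self._v1, key, value)
        # Outgrew the field-based representation: upgrade.
        return DictMap({self._k1: self._v1, self._k2: self._v2, key: value})

class DictMap:
    def __init__(self, items):
        self._items = items
    def get(self, key):
        return self._items[key]
    def set(self, key, value):
        return DictMap({**self._items, key: value})

m = SmallMap().set("a", 1).set("b", 2)
print(type(m).__name__, m.get("b"))  # field-based representation
m = m.set("c", 3)
print(type(m).__name__, m.get("c"))  # upgraded to the dict-backed one
```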
I can't disagree more. They suffer from the same stuff rust async does: they mess with the stack trace and obscure the actual guarantees of the function you're calling (eg a function returning a promise can still block, or the promise might never resolve at all).
Personally I think all solutions will come with tradeoffs; you can simply learn them well enough to be productive anyway. But you don't need language-level support for that.
These are inconveniences, but not show stoppers. Modern JS engines can "see through" async call stacks. Yes, bugs can result in programs that hang - but that's true in synchronous code too.
But async in rust is way worse:
- Compilation times are horrible. An async hello world in javascript starts instantly. In rust I need to compile and link to tokio or something. Takes ages.
- Rust doesn't have async iterators or async generators. (Or generators in any form.) Rust has no built in way to create or use async streams.
- Rust has 2 different ways to implement futures: the async keyword and impl Future. You need to learn both, because some code is impossible to write with the async keyword, and some code is impossible to write with impl Future. It's incredibly confusing and complicated, and it's difficult to learn properly.
- Rust doesn't have a built in run loop ("executor"). So - best case - your project pulls in tokio or something, which is an entire kitchen sink and all your dependencies use that. Worst case, the libraries you want to use are written for different async executors and ??? shoot me. In JS, everything just works out of the box.
I love rust. But async rust makes async javascript seem simple and beautiful.
Generators, at least, are available on nightly.
But I haven’t heard anything about them ever moving to stable. Here’s to another 8 years!
> Modern JS engines can "see through" async call stacks.
I did not know that. I'll have to figure out how this works and what it looks like.
> Rust doesn't have async iterators or async generators. (Or generators in any form.) Rust has no built in way to create or use async streams.
This is not necessary. Library-level streams work just fine. Perhaps a "yield" keyword and associated compiler/runtime support would simplify this code, but this is not really a restriction for people willing to learn the libraries.
Rust has many issues, and so does its async keyword, but javascript is only obviously better if you want to use the tradeoffs javascript offers: an implicit and unchangeable async runtime that doesn't offer parallelism and relies on a jit interpreter. If you have cpu-bound code, or you want to ship a statically-compiled binary (or an embeddable library), this is not a good set of tradeoffs.
I find Rust's tradeoffs to be worth the benefits (I literally do not care about compilation time, and I internalized the type constraints many years ago), and I find the pain of javascript's runtime constraints to be not worth its simplicity or "beauty", although I admit I simply do not view code aesthetically. Perhaps we just prefer to tackle differently-shaped problems.
Yes - I certainly wouldn’t use JavaScript to compile and ship binaries to end users. But as an application developer, I think the tradeoffs it makes are pretty great. I want fast iteration (check!). I want all libraries in the ecosystem to just work and interoperate out of the box (check!). And I want to be able to just express my software using futures without worrying I’m holding them wrong.
Even in systems software I don’t know if I want to be picking my own future executor. It’s like, the string type in basically every language is part of the standard library because it makes interoperability easy. I wish future executors in rust were in std for the same reason - so we could stop arguing about it and just get back to writing code.
They basically stitch together a dummy async stack based on the causality chain. It's not really a stack anymore, since you can have a bunch of tasks interleaved on it, which has to be shown somehow, but it's still nice.
It's also not JS specific. .NET has the same async model (despite also having multithreaded concurrency), and it also has similar debugger support. Not just linearized async stacks, but also the ability to diagram them etc.
https://learn.microsoft.com/en-us/visualstudio/debugger/walk...
And in profiler as well, not just the debugger. So it's entirely a tooling issue, and part of the problem is that JS ecosystem has been lagging behind on this.
I'd like to see some evidence for this. Other than simplicity, IMO there's very little reason to use synchronous Python for a web server these days. Streaming files, websockets, etc. are all areas where asyncio is almost a necessity (in the past you might have used twisted), to say nothing of the performance advantage for typical CRUD workloads. The developer ergonomics are also much better if you have to talk to multiple downstream services or perform actions outside of the request context. Needing to manage a thread pool for this or defer to a system like Celery is a ton more code (and infrastructure, typically).
> async i/o solutions are all single threaded
And your typical gunicorn web server is single threaded as well. Yes you can spin up more workers (processes), but you can also do that with an asgi server and get significantly higher performance per process / for the same memory footprint. You can even use uvicorn as a gunicorn worker type and continue to use it as your process supervisor, though if you're using something like Kubernetes that's not really necessary.
  var a = io.async(doWork, .{ io, "hard" });
  ...
  a.await(io);
If you give your business logic the complete message or send it a stream, then the flow of ownership stays much cleaner. And the unit tests stay substantially easier to write and more importantly, to maintain.
I know too many devs who don't see when they bias their decisions to avoid making changes that will lead to conflict with bad unit tests and declare that our testing strategy is Just Fine. It's easier to show than to debate, but it still takes an open mind to accept the demonstration.
(I use it from Clojure, where it pairs great with the "thread" version of core.async (i.e. Go-style) channels.)
Instead, everything is a job, and even what is considered the main thread is no longer an orchestration thread, but just another worker: after some nominal setup, you scaffold enough threads (usually the core count minus one) to all serve as lockless, work-stealing workers.
Conventional async programming relies too heavily on a critical main thread.
I think it’s been so successful though, that unfortunately we’ll be stuck with it for much longer than some of us will want.
It reminds me of how many years of inefficient programming we have been stuck with because cache-unfriendly traditional object-oriented programming was so successful.
One thing I like about the design is it locks in some of the "platforms" concepts seen in other languages (e.g. Roc), but in a way that goes with Zig's "no hidden control flow" mantra.
The downstream effect is that it will be normal to create your own non-posix analogue of `io` for wherever you want code to hook into. Writing a game engine? Let users interact with a set of effectful functions you inject into their scripts.
As a "platform" writer (like the game engine), essentially you get to create a sandbox. The missing piece may be controlling access to calling arbitrary extern C functions - possibly that capability would need to be provided by `io` to create a fool-proof guarantees about what some code you call does. (The debug printing is another uncontrolled effect).
Assume some code is trying to blink an LED, which can only be done with the led_on and led_off system calls. Those system calls block until they get an ack from the LED controller, which, in the worst case, will timeout after 10s if the controller is broken.
In e.g. Python, my function is either sync or async, if it's async, I know I have to go through the rigamarole of accessing the event loop and scheduling the blocking syscall on a background thread. If it's sync, I know I'm allowed to block, but I can't assume an event loop exists.
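In Python, that split looks roughly like this (`led_on` is stubbed out with a short sleep here, since the real blocking syscall is hypothetical):

```python
import asyncio
import time

# Stand-in for the blocking led_on syscall described above: the real one
# would block until the LED controller acks (or times out after 10s).
def led_on() -> str:
    time.sleep(0.01)
    return "ack"

# Sync color: just call it and block the calling thread.
def blink_sync() -> str:
    return led_on()

# Async color: ship the blocking call to a worker thread so the event
# loop stays responsive while we wait.
async def blink_async() -> str:
    return await asyncio.to_thread(led_on)

print(blink_sync())               # prints "ack"
print(asyncio.run(blink_async())) # prints "ack"
```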
In Zig, how would something like this be accomplished (assuming led_on and led_off aren't operations natively supported by the IO interface?)
Feels like this violates zig “no hidden control flow” principle. I kinda see how it doesn’t. But it sure feels like a violation. But I also don’t see how they can retain the spirit of the principle with async code.
A hot take here is that the whole async thing is a hidden control flow. Some people noticed that ever since plain callbacks were touted as a "webscale" way to do concurrency. The sequence of callbacks being executed or canceled forms a hidden, implicit control flow running concurrently with the main control logic. It can be harder to debug and manage than threads.
But that said, unless Zig adds a runtime with its own scheduler and turns into a bytecode VM, there is not much it can do. Coroutines and green threads have been done before in C and C-like languages, but I'm not sure how easily they would fit with Zig and its philosophy.
In other languages, when the compiler sees an async function, it compiles it into a state machine or 'coroutine', where the function can suspend itself at designated points marked with `await`, and be resumed later.
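That transform can be mimicked by hand. A rough Python sketch of the kind of state machine a compiler might emit for a function with two await points (names and values invented; no real compiler emits exactly this):

```python
# Hand-rolled state machine for a hypothetical async function that awaits
# two fetches and sums the results. Each resume() runs until the next
# suspension point; locals live in the object, not on the call stack.

DONE = object()

class FetchTwice:
    def __init__(self):
        self.state = 0
        self.total = 0

    def resume(self, value=None):
        if self.state == 0:      # start -> suspend at first await
            self.state = 1
            return "await fetch #1"
        if self.state == 1:      # resumed with the first result
            self.total += value
            self.state = 2
            return "await fetch #2"
        if self.state == 2:      # resumed with the second result
            self.total += value
            self.state = 3
            return DONE

co = FetchTwice()
print(co.resume())      # suspends at the first await
print(co.resume(10))    # suspends at the second await
assert co.resume(32) is DONE
print(co.total)         # -> 42
```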
In Zig, the compiler used to support coroutines but this was removed. In the new design, `async` and `await` are just functions. In the threaded implementation used in the demo, `await` just blocks the thread until the operation is done.
To be fair, the bottom of the post explains that there are two other Io implementations being planned.
One of them is "stackless coroutines", which would be similar to traditional async/await. However, from the discussion so far this seems a bit like vaporware. As discussed in [1], andrewrk explicitly rejected the idea of just (re-)adding normal async/await keywords, and instead wants a different design, as tracked in issue 23446. But in issue 23446 the seems to be zero agreement on how the feature would work, how it would improve on traditional async/await, or how it would avoid function coloring.
The other implementation being planned is "stackful coroutines". From what I can tell, this has more of a plan and is more promising, but there are significant unknowns.
The basis of the design is similar to green threads or fibers. Low-level code generation would be identical to normal synchronous code, with no state machine transform. Instead, a library would implement suspension by swapping out the native register state and stack, just like the OS kernel does when switching between OS threads. By itself, this has been implemented many times before, in libraries for C and in the runtimes of languages like Go. But it has the key limitation that you don't know how much stack to allocate. If you allocate too much stack in advance, you end up being not much cheaper than OS threads; but if you allocate too little stack, you can easily hit stack overflow. Go addresses this by allocating chunks of stack on demand, but that still imposes a cost and a dependency on dynamic allocation.
andrewrk proposes [2] to instead have the compiler calculate the maximum amount of native stack needed by a function and all its callees. In this case, the stack could be sized exactly to fit. In some sense this is similar to async in Rust, where the compiler calculates the size of async function objects based on the amount of state the function and its callees need to store during suspension. But the Zig approach would apply to all function calls rather than treating async as a separate case. As a result, the benefits would extend beyond memory usage in async code. The compiler would statically guarantee the absence of stack overflow, which benefits reliability in all code that uses the feature. This would be particularly useful in embedded where, typically, reliability demands are high and memory available is low. Right now in embedded, people sometimes use a GCC feature ("-fstack-usage") that does a similar calculation, but it's messy enough that people often don't bother. So it would be cool to have this as a first-class feature in Zig.
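The core of that calculation is simple when the call graph is static. A toy sketch (frame sizes and call graph are invented), which also shows why recursion breaks it:

```python
# Worst-case stack usage over a static call graph: each function's frame
# size plus the deepest of its callees. A cycle (recursion) makes the
# bound undefined, which is why recursion must be banned or bounded.

frame = {"main": 64, "parse": 128, "read": 256, "log": 32}
calls = {"main": ["parse", "log"], "parse": ["read", "log"],
         "read": [], "log": []}

def max_stack(fn, visiting=()):
    if fn in visiting:
        raise ValueError(f"recursion through {fn}: stack unbounded")
    deepest = max((max_stack(c, visiting + (fn,)) for c in calls[fn]),
                  default=0)
    return frame[fn] + deepest

print(max_stack("main"))  # 64 + (128 + 256) = 448
```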
But.
There's a reason that stack usage calculators are uncommon. If you want to statically bound stack usage:
First, you have to ban recursion, or else add some kind of language mechanism for tracking how many times a function can possibly recurse. Banning recursion is common in embedded code but would be rather annoying for most codebases. Tracking recursion is definitely possible, as shown by proof languages like Agda or Coq that make you prove termination of recursive functions - but those languages have a lot of tools that 'normal' languages don't, so it's unclear how ergonomic such a feature could be in Zig. The issue [2] doesn't have much concrete discussion on how it would work.
Second, you have to ban dynamic calls (i.e. calls to function pointers), because if you don't know what function you're calling, you don't know how much stack it will use. This has been the subject of more concrete design in [3] which proposes a "restricted" function pointer type that can only refer to a statically known set of functions. However, it remains to be seen how ergonomic and composable this will be.
Zooming back out:
Personally, I'm glad that Zig is willing to experiment with these things rather than just copying the same async/await feature as every other language. There is real untapped potential out there. On the other hand, it seems a little early to claim victory, when all that works today is a thread-based I/O library that happens to have "async" and "await" in its function names.
Heck, it seems early to finalize an I/O library design if you don't even know how the fancy high-performance implementations will work. Though to be fair, many applications will get away just fine with threaded I/O, and it's nice to see a modern I/O library design that embraces that as a serious option.
[1] https://github.com/ziglang/zig/issues/6025#issuecomment-3072...
With a 64-bit address space you can reserve large contiguous chunks (e.g. 2MB), while only allocating the minimum necessary for the optimistic case. The real problem isn't memory usage, per se, it's all the VMA manipulation and noise. In particular, setting up guard pages requires a separate VMA region for each guard (usually two per stack, above and below). Linux recently got a new madvise feature, MADV_GUARD_INSTALL/MADV_GUARD_REMOVE, which lets you add cheap guard pages without installing a distinct, separate guard page. (https://lwn.net/Articles/1011366/) This is the type of feature that could be used to improve the overhead of stackful coroutines/fibers.

In theory fibers should be able to outperform explicit async/await code, because in the non-recursive, non-dynamic call case a fiber's stack can be stack-allocated by the caller, thus being no more costly than allocating a similar async/await call frame, yet in the recursive and dynamic call cases you can avoid dynamic frame bouncing, which in the majority of situations is unnecessary--the poor performance of dynamic frame allocation/deallocation in deep dynamic call chains is the reason Go switched from segmented stacks to moveable stacks.
Another major cost of fibers/threads is context switching--most existing solutions save and restore all registers. But for coroutines (stackless or stackful), there's no need to do this. See, e.g., https://photonlibos.github.io/blog/stackful-coroutine-made-f..., which tweaked clang to erase this cost and bring it in line with normal function calls.
> Go addresses this by allocating chunks of stack on demand, but that still imposes a cost and a dependency on dynamic allocation.
The dynamic allocation problem exists the same whether using stackless coroutines, stackful coroutines, etc. Fundamentally, async/await in Rust is just creating a linked-list of call frames, like some mainframes do/did. How many Rust users manually OOM check Boxed dyn coroutine creation? Handling dynamic stack growth is technically a problem even in C, it's just that without exceptions and thread-scoped signal handlers there's no easy way to handle overflow so few people bother. (Heck, few even bother on Windows where it's much easier with SEH.) But these are fixable problems, it just requires coordination up-and-down the OS stack and across toolchains. The inability to coordinate these solutions does not turn ugly compromises (async/await) into cool features.
> First, you have to ban recursion, or else add some kind of language mechanism for tracking how many times a function can possibly recurse. [snip]
>
> Second, you have to ban dynamic calls (i.e. calls to function pointers)
Both of which are the case for async/await in Rust; you have to explicitly Box any async call that Rust can't statically size. We might frame this as being transparent and consistent, except it's not actually consistent because we don't treat "ordinary", non-async calls this way, which still use the traditional contiguous stack that on overflow kills the program. Nobody wants that much consistency (too much of a "good" thing?) because treating each and every call as async, with all the explicit management that would entail with the current semantics would be an indefensible nightmare for the vast majority of use cases.
Maybe. A smart event loop could track how many frames are in flight at any given time and reuse preallocated frames as they dispatch out.
Being explicit about it might also allow for some interesting compiler optimizations across shared library boundaries...
> Tracking recursion is definitely possible, as shown by proof languages like Agda or Coq that make you prove termination of recursive functions
Proof languages don't really track how many times a function can possibly recurse, they only care that it will eventually terminate. The amount of recursive steps can even easily depend on the inputs, making it unknown at the moment a function is defined.
Before io_uring (which is disabled on most servers until it matures), there was no good way to do async I/O on files on Linux.
I wrote a library that I use for this but it would be really nice to be able to cleanly integrate it into async/await.
With the cleanup attribute (a cheap "defer" for C), the sanitizers, static analysis tools, the memory tagging extension (MTE) for memory safety at the hardware level, etc., and Zig 1.0 still probably years away, what's the strong selling point that would make me spend time with Zig these days? Asking because I'm unsure if I should re-try it.
But if you're open to learning languages for tinkering/education purposes, I would say that Zig has several significant "intrinsic" advantages compared to C.
* It's much more expressive (it's at least as expressive as C++), while still being a very simple language (you can learn it fully in a few days).
* Its cross-compilation tooling is something of a marvel.
* It offers not only spatial memory safety, but protection from other kinds of undefined behaviour, in the form of things like tagged unions.
It also fixes a shitton of tiny design warts that we've become accustomed to in C (also very slowly happening in the C standard, but that will take decades while Zig has those fixes now).
Also, probably the best integration with C code you can get outside of C++. E.g. hybrid C/Zig projects is a regular use case and has close to no friction.
C won't go away for me, but tinkering with Zig is just more fun :)
I think this is because many parts of the stdlib are too object-oriented, for instance the containers are more or less C++ style objects, but this style of programming really needs RAII and restricted visibility rules like public/private (now Zig shouldn't get those, but IMHO the stdlib shouldn't pretend that those language features exist).
As a sister comment says, Zig is a great programming language, but the stdlib needs some sort of basic and consistent design philosophy which matches the language's capabilities.
Tbf though, C gets around this problem by simply not providing a useful stdlib and delegating all the tricky design questions to library authors ;)
[1] https://github.com/smj-edison/zicl/blob/bacb08153305d5ba97fc...
    const Bla = struct {
        // public access intended
        bla: i32,
        blub: i32,
        // here be dragons
        _private: struct {
            x: i32,
            y: i32,
        },
    };
It could just be a compiler error/warning that has to be explicitly opted into in order to touch those fields. This allows you to say "I know this is normally a footgun to modify these fields, and I might be violating an invariant condition, but I know what I'm doing".
As such, I'm happy to not have visibility modifiers at all.
I do absolutely agree that "std" needs a good design pass to make it consistent--groveling in ".len" fields instead of ".len()" functions is definitely a bad idea. However, the nice part about Zig is that replacement doesn't need extra compiler support. Anyone can do that pass and everyone can then import and use it.
> This allows you to say "I know this is normally a footgun to modify these fields, and I might be violating an invariant condition, but I know what I'm doing".
Welcome to "Zig will not have warnings." That's why it's all or nothing.
It's the single thing that absolutely grinds my gears about Zig. However, it's also probably the single thing that can be relaxed at a later date and not completely change the language. Consequently, I'm willing to put up with it given the rest of the goodness I get.
Nobody is asking for Pizza * Weather && (Lizard + Sleep), that strawman argument to justify ints and floats as the only algebraic types is infuriating :(
I'd love to read a blog post about your Zig setup and workflow BTW.
    fn computeVsParams(rx: f32, ry: f32) shd.VsParams {
        // `mat4`, `sapp` (sokol-app), `shd` (shader bindings) and `state`
        // come from the surrounding sokol-zig sample.
        const rxm = mat4.rotate(rx, .{ .x = 1.0, .y = 0.0, .z = 0.0 });
        const rym = mat4.rotate(ry, .{ .x = 0.0, .y = 1.0, .z = 0.0 });
        const model = mat4.mul(rxm, rym);
        const aspect = sapp.widthf() / sapp.heightf();
        const proj = mat4.persp(60.0, aspect, 0.01, 10.0);
        return shd.VsParams{ .mvp = mat4.mul(mat4.mul(proj, state.view), model) };
    }
What I would like to see though is a Ziggified version of the Clang extended vector and matrix extensions.
I just find it intellectually offensive that this extremely short-sighted line is drawn after ints and floats, while any other algebraic/number types aren't similarly dignified.
If people had to write add(2, mul(3, 4)) etc for ints and floats the language would be used by exactly nobody! But just because particular language designers aren't using complex numbers and vectors all day, they interpret the request as wanting stupid abstract Monkey + Banana * Time or whatever. I really wish more Language People appreciated that there's only really one way to do complex numbers, it's worth doing right once and giving proper operators, too. Sure, use dot(a, b) and cross(a, b) etc, that's fine.
The word "number" is literally half of "complex number", and it's not like there are 1024 ways to implement 2D, 3D, 4D vector and complex number addition, subtraction, multiplication, maybe even division. There are many languages one can look to for guidance here, e.g. OpenCL[0] and Odin[1].
[0] OpenCL Quick Reference Card, masterpiece IMO: https://www.khronos.org/files/opencl-1-2-quick-reference-car...
[1] Odin language, specifically the operators: https://odin-lang.org/docs/overview/#operators
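The add(2, mul(3, 4)) pain above is easy to make concrete. Here's a minimal JavaScript sketch (hypothetical `add`/`mul` helpers, not from any of the languages discussed) of what complex arithmetic looks like without operators:

```javascript
// Complex arithmetic with plain functions: what would be `a + b * c`
// with proper operators becomes nested function calls.
const add = (a, b) => ({ re: a.re + b.re, im: a.im + b.im });
const mul = (a, b) => ({
  re: a.re * b.re - a.im * b.im, // (a+bi)(c+di) = (ac-bd) + (ad+bc)i
  im: a.re * b.im + a.im * b.re,
});

const a = { re: 1, im: 2 };
const b = { re: 3, im: 0 };
const c = { re: 0, im: 1 };

const result = add(a, mul(b, c)); // vs. simply `a + b * c`
console.log(result); // { re: 1, im: 5 }
```

There is indeed only one sensible definition of each of these operations, which is the point being made: nothing here is ambiguous, it's just verbose.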
Async clearly works for many people, I do fully understand people who can't get their heads around threads and prefer async. It's wonderful that there's a pattern people can use to be productive!
For whatever reason, async just doesn't work for me. I don't feel comfortable using it and at this point I've been trying on and off for probably 10+ years now. Maybe it's never going to happen. I'm much more comfortable with threads, mutex locks, channels, Erlang style concurrency, nurseries -- literally ANYTHING but async. All of those are very understandable to me and I've built production systems with all of those.
I hope when Zig reaches 1.0 I'll be able to use it. I started learning it earlier this month and it's been really enjoyable to use.
Those are independent of each other. You can have async with and without threads. You can have threads with and without async.
The example code shown in the first few minutes of the video is actually using regular OS threads for running the async code ;)
The whole thing is quite similar to the Zig allocator philosophy. Just like an application already picks a root allocator to pass down into libraries, it now also picks an IO implementation and passes it down. A library in turn doesn't care about how async is implemented by the IO system, it just calls into the IO implementation it got handed from the application.
If you don’t want to use async/await, just don’t call functions through io.async.
Wow. Do you expect anyone to continue reading after a comment like that?
If I had a web service using threads, would I map each request to one thread in a thread pool? It seems like a waste of OS resources when the IO multiplexing can be done without OS threads.
> last time I checked a lot of crates.io is filled with async functions for stuff that doesn't actually block.
Like what? Even file I/O blocks for large files on slow devices, so something like async tarball handling has a use case.
It's best to write in the sans-IO style, and then your threading or async can be a thin layer on top that drives a dumb state machine. But in practice I find that passable sans-IO code is harder to write than passable async. It makes a lot of sense for a deep indirect dependency like an HTTP library, but less sense for an app.
This is a bizarre remark
Async/await isn't "for when you can't get your head around threads", it's a completely orthogonal concept
Case in point: javascript has async/await, but everything is singlethreaded, there is no parallelism
Async/await is basically just coroutines/generators underneath.
Phrasing async as 'for people who can't get their heads around threads' makes it sound like you're just insecure that you never learned how async works yet, and instead of just sitting down + learning it you would rather compensate
Async is probably a more complex model than threads/fibers for expressing concurrency. It's fine to say that, it's fine to not have learned it if that works for you, but it's silly to put one above the other as if understanding threads makes async/await irrelevant
> The stdlib isn't too bad but last time I checked a lot of crates.io is filled with async functions for stuff that doesn't actually block
Can you provide an example? That wasn't my experience last time I used Rust, but I don't use Rust a great deal anymore.
> makes it sound like you're just insecure
> instead of just sitting down + learning it you would rather compensate
Can you please edit out swipes like these from your HN posts? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
Your comment would be fine without those bits.
Maybe I just wish Zig didn't call it async and used a different name.
Async-await in JS is sometimes used to swallow exceptions. It's very often used to do 1 thing at a time when N things could be done instead. It serializes the execution a lot when it could be concurrent.
    if (await is_something_true()) {
      // here is_something_true() can be false
    }
Similar side-effects happen in other languages that have async-await sugar.
It smells as bad as the Zig file interface with intermediate buffers reading/writing to OS buffers until everything is a buffer 10 steps below.
It's fun for small programs but you really have to be very strict to not have it go wrong (performance, correctness).
That being said, I don't understand your `is_something_true` example.
> It's very often used to do 1 thing at a time when N things could be done instead
That's true, but I don't think e.g. fibres fare any better here. I would say that expressing that type of parallel execution is much more convenient with async/await and Promise.all() or whatever alternative, compared to e.g. raw promises or fibres.
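That convenience difference is easy to show side by side. A minimal sketch, assuming a hypothetical `delay` helper standing in for real async work:

```javascript
// Hypothetical sketch: sequential awaits vs. concurrent Promise.all.
const delay = (ms, v) =>
  new Promise((resolve) => setTimeout(() => resolve(v), ms));

async function sequential() {
  const a = await delay(50, 1); // the second delay doesn't even start
  const b = await delay(50, 2); // until the first one has finished
  return a + b;                 // ~100 ms total
}

async function concurrent() {
  // both delays run at once; total wall-clock time is ~50 ms
  const [a, b] = await Promise.all([delay(50, 1), delay(50, 2)]);
  return a + b;
}

sequential().then((v) => console.log("sequential:", v)); // sequential: 3
concurrent().then((v) => console.log("concurrent:", v)); // concurrent: 3
```

Both resolve to the same value; only the wall-clock time differs, which is exactly the "1 thing at a time when N things could be done" trap.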
`is_something_true` is very simple: the condition is true, yet inside the block, if you were to check again, it can be false, something that can't happen in synchronous code. With async-await it's very easy to get yourself into situations like these, even though the code seems to yell at you that you're in the true branch. The solution is adding a lock, but with such ease of writing async-await, it's rarely caught.
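A self-contained sketch of that check-then-act race (hypothetical names, plain Node/browser JavaScript):

```javascript
// The condition is true when checked, but false inside the "true" branch,
// because the await yields to the event loop mid-statement.
let flag = true;

async function isSomethingTrue() {
  return flag;
}

async function worker() {
  if (await isSomethingTrue()) {
    // We are "inside the true branch", but the await above suspended us,
    // and another task has flipped `flag` in the meantime.
    return flag; // false, despite the check that got us here
  }
  return null;
}

async function main() {
  const pending = worker(); // worker suspends at its first await
  flag = false;             // interleaved mutation while it is suspended
  return pending;           // resolves to false, not true
}

main().then((v) => console.log("flag inside branch:", v)); // false
```

The interleaving here is deterministic: even an already-resolved promise suspends the awaiting function until the next microtask, so the mutation in `main` always lands between the check and the branch body.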
The idea of generalizing threads for use in parallel computing/SMP didn't come until at least a decade after the introduction of threads for use as a concurrency tool.
Wasn't this only true when CPUs were single-core? True parallelism could only happen once multi-core CPUs came along (outside of using multiple CPUs).
I guess they didn't get a release from the question asker and so they edited it out?
The problem here though is that the presenter didn't repeat the question for the audience which is a rookie mistake.
What we normally call "pure" code requires an allocator but not the `io` object. Code which accepts neither is also allocation-free - something which is a bit of a challenge to enforce in many languages, but just falls out here (from the lack of closures or globally scoped allocating/effectful functions - though I'm not sure whether `io` or an allocator is now required to call arbitrary extern (i.e. C) functions, which you'd need for a 100% "sandbox").
Text version.
The desynced audio/video makes it a bit painful to watch.