I'd like to clarify a little what the OTP is vs the BEAM - this article does an OK job at that explanation but confuses it a little.

The BEAM is the underlying VM used by Erlang, Elixir, and Gleam. It provides the basic primitives: spawning processes, sending messages, handling messages, and so on. Processes are lightweight, pre-emptively scheduled tasks, similar to goroutines or green threads in other languages. These primitives are mostly lower level than you want to deal with on a day-to-day basis.

The OTP is a standard library built on top of that, providing a convenient, battle-tested way of building systems. The GenServer is the most well-known component: it simplifies writing code that handles messages one by one, updating held state, and sending replies. With the machinery around it, sending a message and getting the reply looks to the caller just like making a function call. GenServers are then managed by Supervisors that know what to restart when something crashes.

One notable difference between Elixir and Gleam is that Elixir gets to just re-use the OTP code as-is (with some Elixir wrappers on top for convenience). Gleam concluded that the OTP is built expecting dynamic types, and that for best results in Gleam they'd need to re-implement the key primitives. That's why the example shown is an "Actor" not a GenServer - it serves the same purpose, and might even fit in a Supervision tree, but isn't actually a GenServer.
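
To make that concrete, here's a rough sketch of a counter actor in Gleam, written against the gleam_otp actor API as I understand it at the time of writing (names like `Message` and `handle` are mine, and exact signatures vary between gleam_otp versions):

```gleam
import gleam/erlang/process.{type Subject}
import gleam/otp/actor

pub type Message {
  // carries a Subject the actor can reply on
  Increment(reply_to: Subject(Int))
}

// One function receives every message and returns the next state.
fn handle(message: Message, state: Int) -> actor.Next(Message, Int) {
  case message {
    Increment(client) -> {
      process.send(client, state + 1)
      actor.continue(state + 1)
    }
  }
}

pub fn main() {
  // Start the actor with an initial state of 0, then make a
  // synchronous "call" that waits up to 100ms for the reply.
  let assert Ok(counter) = actor.start(0, handle)
  process.call(counter, Increment, 100)
}
```

The `process.call` at the end is the "just like making a function call" part: it sends the message and blocks for the reply, hiding the message-passing from the caller.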

Thanks a lot for the missing context! Explaining OTP is not an easy task!!!
Every time I see gleam examples I'm filled with calm, it seems like a very nice language to use.

I'm a data engineer so unless I feel like doing a lot of library development, I'll probably wait on the data ecosystem to advance a little bit more.

I hope it does some day, the BEAM feels like the perfect place for a distributed dataframe library to live.

Same here. Data Eng by day but looking to move away from Python but Go and Rust i dunno doesn't feel like a good replacement for it plus the Python ecosystem in the data space is huge.
  • oDot · 3 months ago
Gleam is a very fun language to write in, and no need to start a project from scratch to explore it

I wrote Vleam to help incorporate Gleam into a Vue project, if you have one already

https://github.com/vleam/vleam

Oh wow. Does the Gleam LSP still work alongside TypeScript's in VSCode when editing Gleam scripts in Vue files?

If so… I’d love a Svelte version of this.

  • oDot · 3 months ago
It does! It uses the `lang` tag to check which one it is.
What I find very fun about Gleam is its minimalism. You've got functions, types, a case expression, and modules to organize them. That's it. No inheritance, no methods, no ifs, no loops, no early returns, no macros. And yet it's fairly productive and ergonomic to write.

For the most part, Gleam feels like it has gathered the best ideas from some interesting langs (perhaps a chimera of Rust, Go, ML and Lisp) and put them into one coherent, simple language. It's not that the ideas are new, they're just very well organized.

Gleam's labelled arguments are AFAIK a unique feature. (Edit: nope, see comments below.) They let a named function argument appear differently to the caller than it does internally. The caller may want to refer to it by a verb ("multiply_by") while the function refers to it as a noun ("multiplier").
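
For instance (a made-up sketch; `scale` and the `multiply_by`/`multiplier` names just follow the verb/noun example above):

```gleam
// The label `multiply_by` is what callers write; the internal
// name `multiplier` is what the function body uses.
pub fn scale(value: Int, multiply_by multiplier: Int) -> Int {
  value * multiplier
}

pub fn main() {
  scale(3, multiply_by: 4)  // -> 12
}
```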

The `use <- ...` expression and the pipeline operator `|>` can cut the boilerplate of nested function calls in a lot of cases. It's a nod to the ergonomic benefits of metaprogramming without giving the user a full macro system.
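
A small sketch of both, with made-up `fetch_user`/`fetch_email` helpers stubbed out so it stands alone:

```gleam
import gleam/int
import gleam/list
import gleam/result

// The pipeline operator threads a value through successive calls.
pub fn total() -> Int {
  [1, 2, 3]
  |> list.map(fn(x) { x * 2 })
  |> int.sum
}

// `use` flattens what would otherwise be nested result.try callbacks.
pub fn email_for(id: Int) -> Result(String, Nil) {
  use user <- result.try(fetch_user(id))
  use email <- result.try(fetch_email(user))
  Ok(email)
}

// Hypothetical helpers, stubbed for illustration.
fn fetch_user(id: Int) -> Result(String, Nil) {
  Ok("user-" <> int.to_string(id))
}

fn fetch_email(user: String) -> Result(String, Nil) {
  Ok(user <> "@example.com")
}
```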

Finally, Gleam's tooling is the best in the business, hands down. The entire toolchain (compiler, package manager, LSP, etc.) is built into a single binary with an ergonomic CLI. Compare that to cobbling together a Python environment, or to pretty much any other language where you have to assemble and install a piecemeal toolkit to be productive.

Very excited to try Gleam more.

  • _flux · 3 months ago
> Gleam's labelled arguments are AFAIK a unique feature

OCaml also does this:

    # let foo ~bar:baz = baz + 42;;
    val foo : bar:int -> int = <fun>
    # foo ~bar:12765;;
    - : int = 12807
What I really enjoyed was its parameter forwarding, which is available in some languages for records:

    # let bar = 42;;
    val bar : int = 42
    # foo ~bar;;
    - : int = 84
Works similarly for optional values:

    # let test ?(arg=5) () = arg;;
    val test : ?arg:int -> unit -> int = <fun>
    # let arg = None;;
    val arg : 'a option = None
    # test ?arg ();;
    - : int = 5
And if you used ~arg, it would be a non-optional parameter:

    # let arg = 5;;
    val arg : int = 5
    # test ~arg ();;
    - : int = 5
Overall pretty spiffy. Partial application also works here nicely, but it does cause some trouble with optional arguments :/. For example, here "test" needs to have a final argument () so the optional parameter can be erased:

    # test;;
    - : ?arg:int -> unit -> int = <fun>
    # test ();;
    - : int = 5
> Gleam's labelled arguments are AFAIK a unique feature. This lets you have a named function argument appear differently to the caller than it does internally. The caller may want to refer to it by a verb ("multiply_by") and the function may want to refer to it as a noun ("multiplier").

Swift has this feature as well.

> perhaps a chimera of Rust, Go, ML and Lisp

I think it is pretty much just ML on the BEAM? It resembles Rust and Go mainly in the ways that they have also been influenced by ML. The syntax is fairly C-like, which I guess resembles Rust more than OCaml, but I think that's surface level. I don't know Rust or Go, but I don't see any concepts in Gleam that aren't familiar from either OCaml or Erlang.

Lisp I don't see it at all, except treating functions as entities in their own right, which is a nearly universal language feature at this point and doesn't ring as particularly lispish to me anymore.

IMO the design is extremely similar to elm. It feels like the author wanted elm but instead of targeting frontend+javascript, it targets beam+backend. It also seems to make very similar tradeoffs to OCaml.

Which is great, because elm is probably the best programming language I've ever used.

[dead]
> Specifying what a function returns in Gleam is optional; due to type inference, Gleam will understand it anyway.

Oh yuck, I don't like that.

Function signatures are for HUMANS to read. Sure, the compiler can reach into a function's implementation to glean what it returns, but that's no reason to force a human reader to waste time doing the same.

"I'll accept this and this from you and then return that kind of thing to you."

vs

"I'll accept this and this from you and then return something to you that your supervisor will tell you if you're using correctly or not - or you could watch me as I work to see what it is I've made for you."

Generally the advice is that public functions should have type annotations, while private functions needn't if you don't want them. But I could see arguments either way, actually. Sometimes it's a bit of annoying overhead when it's obvious what the function returns, and omitting it certainly helps make prototyping feel nice, especially since everything's typesafe anyway.

As I said in another comment, as the compiler knows the type of everything, it would be easy to have the LSP overlay the type (or annotate it into the text via code action for you).
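
To make the trade-off concrete, here's a sketch of the same function with and without the annotation:

```gleam
// Unannotated: inferred as fn(Int, Int) -> Int, since `+` is
// integer addition in Gleam (floats use `+.`).
fn add(x, y) {
  x + y
}

// Annotated: identical behaviour, but the contract is written
// down for human readers and for generated documentation.
fn add_annotated(x: Int, y: Int) -> Int {
  x + y
}
```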

  • lpil · 3 months ago
I think if I were making Gleam again from scratch it'd be required; them being optional was somewhat inherited, as Gleam is an ML family language.

Luckily Gleam programmers do write return annotations in practice.

As a user of a package, you see the signature, including the return type, in the generated documentation and in your editor through the LSP (which is part of the core language), so it's not really an issue.

But I agree that specifying the return explicitly is a bit nicer when reading the source; I don't think it's a big issue that it's optional though.

But what if the inference generates a contract based on the narrowest scope of the types returned, and then later you add a wider type (because that's what you'd originally intended but didn't have any actual use cases at the time) and now everyone's code blows up because they assumed the narrower type and don't handle the wider one?

If you were forced to specify the return type, your original intent becomes clear for all (rather than leaving it to a machine that can only assume the narrowest scope).

  • lpil · 3 months ago
Gleam doesn't have subtyping, so this drawback is impossible. Its type system is similar to OCaml, Elm, F#, etc.

It won't, because there is no type narrowing in Gleam currently, and the type system is very different to something like TypeScript. I just don't see what you are describing as something that could realistically happen, but if you have an example I would be curious.

If the return type changes, the code using the function should definitely blow up until it's changed, don't think that's a bad thing.

  • scns · 3 months ago
> If the return type changes, the code using the function should definitely blow up until it's changed, don't think that's a bad thing.

This. It's the big upside of algebraic data types. The function won't blow up though: the compiler shows you every piece of code where the added type needs to be handled, and thereby ensures the program runs correctly.

I usually also like type annotations for return types. The problem I'm having with my PL is with generics: If the return type depends on T but is not T. If I infer the return type it's obvious.
With the right tooling, this is a non-issue. Compare with Scala, a language that also allows you to omit return types (except for recursive functions).

IntelliJ (the best Scala IDE) now has something called X-Ray mode, see the videos here: https://blog.jetbrains.com/scala/2023/12/21/the-x-ray-mode/

Alternatively, it also allows you to configure it to always show the inferred types on functions.

Before, I annotated things a lot, but after having this I've never felt the need to annotate functions ever again. I still do it for certain things though but for other reasons than readability (e.g. for type-driven development).

I thought so too, but then I used OCaml and found myself very productive without explicitly annotating everything compared to Haskell. It gives you the speed of dynamic languages with the correctness of a static language. The LSP should be able to annotate for you.

The cases where I think it is necessary to annotate, I can do so.

> cases where I think it is necessary to annotate

Worth noting that that can be reactive, too. If a type checking error message is confusing or insufficiently localized, pinning down a few things around it can clarify.

As tome says, though, that can be a thing in Haskell too. There exist situations in Haskell where the compiler cannot figure out the types for you, but they're not most code.

  • tome · 3 months ago
I'm curious about what you mean by 'compared to Haskell'. You don't have to annotate everything in Haskell either, do you?

You don't, but the convention in Haskell is that you do. The linters I used complain, for example, if you don't annotate.
  • k_bx · 3 months ago
Very often in Elm/Haskell I would write the code first, then let my IDE insert the type annotation. Not always perfect, but often good enough.
> Function signatures are for HUMANS to read

Nope, exactly the other way around. Function signatures are to help machines optimize your code. What humans should care about are names. It's just that we've normalized an extremely misguided practice of abusing type systems as a crutch for non-descriptive variable and function names.

Hard disagree. It's a contract between compiler and coder, and it needs to be exposed to both. Type information does not belong in names.

    // For example, this:
    fn get_driving_requirements(vehicle: Vehicle) -> String {
        ...
    }
    // Is just so much nicer than this:
    fn get_driving_requirement_description_from_vehicle(vehicle) {
        ...
    }
This is nicer for me (a human) to read, even if the compiler can infer the types of both.
> It's just that we've normalized an extremely misguided practice of abusing type systems as a crutch for non-descriptive variable and function names.

And maybe we just abuse formal logic because we've forgotten how to write good prose.

A seriously important point you make.

The little bit of time it takes to annotate the type that the function returns will save a lot of time when other people take over the codebase.

Humans are not great compilers, and spending time trying to figure out what the hell is returned from a function is time wasted.

> Humans are not great compilers and spending time trying to figure out what the hell is returned from a function

That's why I leave it to the compiler and don't try to do it as a human.

If you aren't able to figure out the return type then something has gone wrong: maybe the language or code is too complex, or maybe you don't understand the code very well.
As was said above, humans aren't great at it. If I want to know the type of an expression, I ask the compiler.
> humans aren't great at it

Yeah that's fine, although my point is that being unable to figure out the return type is a symptom of a bigger problem imo. It means we're outsmarting ourselves.

Honestly, on this one, I see both sides. I think functions should be written so that their type signatures are readable in the code, but I'm not sure I want a language making something semantically incorrect just to discourage bad style.

Either way, I use (and honestly love) python on a daily basis and I wish my gripes with it were on this small a level. Yes, I'm looking at you, passed by assignment and late binding closure.

Doesn’t gleam come with a linter? If so, it’s likely that that can be enforced.
The designer of gleam is trying to make a linter unnecessary, by not having extraneous language features that need to be discouraged.

Guess it's not working 100% if someone feels this way! However, since the compiler/LSP knows the full type of every function, it should be possible to have a code action that says 'annotate the type of this function for me'.

Yup that would be fine, so long as it's required to be done before you release your code. Putting return types on functions is signaling intent, much like left and right signaling on a car.

I want to know what your function COULD produce, not what it happens to produce now. Type inference can only show that in every current path you return a TYPE_X, and that works fine until one day you return an instance of its supertype TYPE_A, and now my code blows up because I misunderstood the contract. You may have always intended for it to return TYPE_A and its subtypes, but you never signaled that intent to others. This is why enforcing explicit contracts is necessary.

Well Gleam doesn't have supertypes, so the function signature (even when inferred) does describe everything your function could possibly produce. If the author changes that, your code will stop compiling, not blow up at runtime.

I agree that packages authors really should annotate their types though, and possibly that they ought to be forced to do so, just to make reading the code easier.

  • lpil · 3 months ago
Gleam doesn't have subtyping, so the drawback described here is not possible.
It's honestly nice when it's optional but the tooling understands the inferred type (both LSP and docs). Sometimes you have a function with a single-line implementation and an obvious return (bool or string); having to add explicit types to everything becomes a bit silly.
I've explored Gleam on a side project and I like it. The only real issue I had at the time was that it was a bit too niche to use for anything "serious", but that seems to be changing - which is cool.

I really want to see a demonstration that it can work with Phoenix LiveView though, or a Gleam equivalent with good IDE and tooling support. The productivity of the Phoenix LivewView + Elixir stack, and the ability to build async message-driven systems in concert with the UI remains a killer feature for me.

I just downloaded livebook yesterday, and it will be a great thing for gleam when you can include code blocks for gleam in livebook.

Also, if you're looking to pick up elixir or Erlang, I don't think there's a better tool. It's a jupyter-style notebook that feels really really good

  • ikety · 3 months ago
I adore Livebook and very much agree. Elixir and Gleam are both amazing, which makes choosing one for a specific task even harder. I keep a Livebook running 24/7 on my home server, so I can always try out snippets anywhere. Gleam's typing is great, but not necessarily a reason for me to use it over Elixir, especially with the new Elixir type system shaping up.

There's tons of pros to each. I guess I will continue to have fun with both!

It looks like a nice language, with the {} syntax to tease all the C and Java developers out there. That's more important than it looks for helping the language spread.

The OTP example lets me state one of my few sore points about all the BEAM languages I worked with or looked at: the handle_this / handle_that madness. Search for "type AsyncTaskMessage" in the post to get there.

I don't want to write code like this (I omit the details; the ellipses are mine):

  type AsyncTaskMessage {
    Increment(reply_to: Subject(Int))
    Decrement(reply_to: Subject(Int))
  }

  fn handle_async_task(message ...) {
    case message {
      Increment(client) -> {
        code ...
      }
      Decrement(client) -> {
        code ...
      }
      ...
    }
  }

I want to write code like this

  type AsyncTaskMessage {
    Increment(reply_to: Subject(Int))
    Decrement(reply_to: Subject(Int))
  }

  fn Increment(client) {
    code ...
  }

  fn Decrement(client) {
    code ...
  }
or any variation of that, maybe this OO-ish one (after all, this is a stateful object):

  type AsyncTaskMessage {
    fn Increment(reply_to: Subject(Int)) {
      code ...
    }

    fn Decrement(reply_to: Subject(Int)) {
      code ...
    }
  }
Maybe one of the next BEAM languages will handle that automatically for us.
  • lpil · 3 months ago
> Maybe one of the next BEAM languages will handle that automatically for us.

It not being automatic is a feature as that pattern is only what you want in the trivial case, but in real programs you are going to want more control.

In early Elixir there was a relatively popular macro library[1] that offered what you ask for (see code example below) but as Elixir matured and started being used in more than toy projects it was largely abandoned.

    defmodule Calculator do
      use ExActor.GenServer

      defstart start_link, do: initial_state(0)

      defcast inc(x), state: state, do: new_state(state + x)
      defcast dec(x), state: state, do: new_state(state - x)

      defcall get, state: state, do: reply(state)

      defcast stop, do: stop_server(:normal)
    end
[1]: https://github.com/sasa1977/exactor
Great. That was basically all we used GenServers for in the Elixir projects I worked on. I remember one or two handle_info, but nearly all our GenServers were (in OO parlance) objects with methods to update their state, trigger actions, and sometimes read the state.
Not sure what the macro story is in Gleam, but in Elixir you could write Increment/Decrement macros that tie into the process API. Kind of a contrived example though.
Could you possibly do this by transpiling to Gleam? Seems like the transformation is possibly simple enough that it could be done without implementing a new language.
Yeah, this syntax kinda sucks

  fn handle_async_task(message ...) {
    case message {
      Increment(client) -> {
In Erlang to "just call the damn function" I'd write it something like

  handle_cast(R=#message{type=increment}, S) -> work_module:increment(R, S);
  handle_cast(R=#message{type=decrement}, S) -> work_module:decrement(R, S).
Or I'd use maps instead of records.
It looks like Gleam doesn't allow for multiple function heads. That is one of my favorite features of Elixir as it reduces complexity.
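
For comparison: where Elixir would use several function heads, Gleam pushes the branching into a single head plus a case expression, e.g. (a trivial sketch, names mine):

```gleam
// One head; the pattern matching lives in the case expression.
pub fn describe(n: Int) -> String {
  case n {
    0 -> "zero"
    1 -> "one"
    _ -> "many"
  }
}
```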
Oh that's very sad! That's one of the great features of Elixir/Erlang.

I wonder why they decided to not permit that sort of thing. :(

This is a sweet blog post, I'm really enjoying Gleam for its simplicity, I'm excited for gleam_otp to mature more eventually
100% agree with this. Some success stories with Gleam OTP in the wild will be amazing. I'm sure they will come
  • Keats · 3 months ago
Has anyone very familiar with Elixir tried Gleam? I've been eyeing Elixir for years but I would miss types. Gleam looks nice, but you lose Phoenix/LiveView, which is 90% of the appeal of Elixir to me.
I've used both and like both. It's pretty smooth to use Gleam code from Elixir, so you could implement core data handling and business logic in Gleam and still use Phoenix for the webapp layer.

But also, just try Elixir? In a lot of ways it handles like a typed language because of exhaustive pattern matching and being able to guard/match in function parameters. There's not much practical difference between matching on Some(data) or {:ok, data}. I prefer having types too, but for everything Elixir (or Erlang, for that matter) gives you, it's a manageable compromise.

Anyway elixir is getting a type system right now. I like gleam a lot but I'm not sure it's really aiming to be a universal elixir replacement.

Why does the author progressively spell "vehicle" worse throughout this post!?

Other than that, Gleam seems pretty neat. Wish it the best of luck.

The main reason for using a BEAM language is to leverage multicore/concurrency conveniently and reliably. Gleam's OTP seems less mature than that of Erlang or Elixir. If you want to write simple programs, Gleam seems okay. If you want mature multicore or concurrent use, I would think twice.

Here is a critical review of the officially listed limitations of Gleam's OTP:

1. Lack of Support for Named Processes: This is a significant limitation because named processes are essential for easy reference and communication between different parts of the system. Without support, developers might resort to workarounds that could lead to errors and inefficiencies.

2. Untyped Global Mutable Variables: The use of untyped global mutable variables introduces potential risks, such as runtime errors and unpredictable behavior, especially since they can be uninitialized. This undermines the type safety and reliability generally expected in functional programming environments.

3. Limited Actor Abstractions: The current scarcity of actor abstractions could restrict developers from implementing more complex or varied concurrency patterns, limiting the utility of the library until more abstractions are added.

4. Unsupported OTP System Messages: Dropping unsupported system messages can lead to unexpected behavior and bugs that are difficult to trace. Full support for OTP system messages is crucial for reliable and predictable actor-based concurrency.

5. Uniform Shutdown Period for Supervisors' Children: Not having different shutdown periods for child processes, especially for child supervisors, can lead to improper shutdowns and potential data loss or corruption. This deviates from the behavior in Erlang and Elixir, where more granular control is available.

6. Limited Testing: The lack of extensive testing, both in unit tests and real-world applications, indicates that the library might be unstable or have undiscovered bugs. This could affect its adoption and reliability in production environments.

[flagged]
Hardly, this is an accurate, insightful take and maybe just reads as stilted grammar/style to you.
The author mentions lustre, the web framework for Gleam that was inspired by Elm. I really like Elm. However, the limitation for me was the lack of a really robust component library. When I say component library, I don’t mean aesthetics but function. For example, a feature complete search bar with autocomplete, multiselect, create, delete, clear, etc. functionality. The reason I use typescript is because of component libraries like Mantine where generic components such as a search bar are already implemented, fully functional and accessible. I hope someone sees this gap and tries to fill it so that functional languages can be viable on the web!
Author of Lustre here! Yeah I agree, lack of a robust headless component library like radix-ui or react-aria is such a mark against Elm. I think it's even more important for Lustre because really all our users are going to be backendy folks with little interest in the nitty gritty frontend stuff.

Lustre will eventually have a companion ui library to plug this gap but it turns out maintaining all this stuff is a bit of a time sink, so I'm taking baby steps ^.^

ReScript is really solid in this niche IMO. It's basically ocaml that compiles to js. The standard library includes excellent bindings to react, so you can use react components without too much extra work. You do have to write your own type annotations for them, but it's way way less trouble than elm ports.
  • lpil · 3 months ago
I believe this is on the roadmap for the Lustre framework.
I love the rust-like syntax. I write a lot of Rust and there have been times where I wished for a language on top of Rust with a tiny bit of OO patterns and a GC, so the borrow checker doesn't get in my way.
> All you need to get started with Gleam is already in this CLI. You don't have ANYTHING else to check, there's ZERO decision paralysis: THIS-IS-WHAT-WE-WANT. JavaScript makes you pick a tool among hundreds of options, for each tool provided in Gleam's CLI.

This is undoubtedly true, although if Gleam ever becomes as popular as JavaScript, there will almost inevitably be the same set of choices to be made for Gleam.

I'm not so sure. Both Go and Rust seem like examples of very popular ecosystems with a set of common, "blessed" tooling. (They didn't have everything from the get-go, e.g. rust-analyzer was very popular before it became part of the Rust project, but it still demonstrates that a large ecosystem can rally around a set of common tooling.)
That's a fair point, but I don't think it's really fair to say Gleam has everything necessary included when it's brand new. That it currently doesn't suffer from the problems of the JS ecosystem doesn't mean it won't in the future.

(As another example besides rust-analyzer, there's also all the Go dependency management tools that existed before native Go modules, e.g. Dep, Glide, Go Package Manager, etc.)

> This is undoubtedly true, although if Gleam ever becomes as popular as JavaScript, there will almost inevitably be the same set of choices to be made for Gleam.

Huh? I don't think so at all, Go and Rust did this same thing (include great and easy to use tooling as part of the language), they are super popular now and the tooling is still great.

Go originally didn't include a great way to manage dependencies. That fortunately got fixed. I think my initial statement of it being inevitable was an exaggeration. My point was that I don't think you can fairly compare a new language with one that has existed for decades as if there's not a chance it will end up having similar problems.
  • 3 months ago
One minor nitpick that irks me more than it probably should: in Gleam, 1 / 0 gives you 0.
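
A quick sketch of that behaviour, along with what I believe is the stdlib escape hatch (`int.divide`, which returns a Result instead; the exact signature is an assumption on my part):

```gleam
import gleam/int

pub fn main() {
  // Integer division by zero silently yields 0 rather than crashing.
  let a = 1 / 0
  // gleam/int's divide makes the failure visible as Error(Nil).
  let b = int.divide(1, by: 0)
  #(a, b)
}
```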
bro, just wait til you learn that gleam has something similar to monad. now pass me the joint (also a compliment)
[dead]
  • 3 months ago
Impossibly slow, just use F#.
A bit of a dismissive comment.

However, there is a point here: F# offers the things that people are getting all excited about with Gleam (pragmatic functional programming in the ML school) and yet receives little hype.

Perhaps it needs a BEAM compiler target?

Hey, article author here!

I've known about F# since my university days and I'm guilty of never really digging into it except for toy projects. I've worked at many companies and most of them were Microsoft-hostile, which would explain why I've stayed far away from .NET-affiliated technologies. We all agree that this is a silly reason, but if you wonder why F# doesn't get any love, I'd bet it's because a lot of people are ignoring Microsoft. Let's change that.

Perceptions are slow to shift.

Nowadays, you can use .NET on Linux with VS Code and Ionide (open-source LSP). No need to ever talk to a Microsoft sales rep :)

[dead]
This would make it much slower and reduce the number of libraries it can use by a factor of 10, maybe more, without improving its concurrency capabilities, and possibly degrading them (because the .NET threadpool and task system are robust and low-overhead).

But I guess that could make HN like it instead?

  • lpil · 3 months ago
I think you may be thinking of some other language. Gleam adds no overhead to the target platform used and both the Erlang VM and JavaScript runtimes have respectable performance. The Erlang VM specifically outcompetes F# for networked services, which is its main use case.
This applies to the entire BEAM family. HN loves to sing its praises yet never looks into the details, nor has the performance intuition to know that the use of predominantly interpreted languages is frequently inappropriate (note how Erlang is next to Ruby):

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

With that said, I had no idea it could target JS and will need to look into that, thanks (V8 is surprisingly competent at number crunching if it can inline everything, otherwise still subject to harsh limitations of dynamic nature of JS).

Also, F#, building on top of .NET, has rich ecosystem and a set of its own async abstractions with all kinds of actor patterns being popular (within very small F# community that is). It’s fast, and so is its networking stack.

Why am I criticizing Gleam? Because we've had two decades of programming languages built on slow foundations that impose a strict performance ceiling. There is no reason for nice languages like Gleam to be slow, yet they are. I'm not sure why; maybe because of the myth that compiled languages require more effort and interpreted languages are more productive to write in?

> benchmarksgame/

I suppose if you want to do some distributed mandelbrot in Erlang, you'd just use NIFs to use an actual systems language for compute.

> maybe because of the myth of compiled languages requiring more effort and interpreted languages being more productive to write in?

Erlang focuses its ergonomics on distributed processing, dispatch, and communication. Some of that is in decisions around the language (e.g. fully functional, hot-swappable and remotely dispatchable modules via message passing, metaprogramming, pattern matching), the runtime (e.g. gen_server, supervisors, lightweight processes with isolated garbage collection), and the core libraries (e.g. ETS, HTTP client/server), but it's also the ecosystem that has built up around Erlang because of its soft real-time flavor. If things like soft real-time, high process counts, and network dispatch aren't really interesting to you, and the language isn't your cup of tea, then you aren't the target market for Erlang and its ilk. But it is certainly useful and productive for a number of people/orgs who've made the leap to OTP based on actual business value (e.g. Discord and (historically) WhatsApp).

  • igouy · 3 months ago
The benchmarks game website does quote the Erlang FAQ:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

"Make it work, then make it beautiful, then if you really, really have to, make it fast. 90 percent of the time, if you make it beautiful, it will already be fast. So really, just make it beautiful!"

– Joe Armstrong

This, together with the common misunderstanding of Donald Knuth's quote (which was about not hand-writing assembly, not about throwing the baby out with the bathwater), is why we don't have nice things in places we ought to.

If you have a set of tools with comparable features and productivity*, but one of them comes at a steep performance cost, perhaps you might want to pick the one that is snappier and easier to use?

* Though one could argue F# offers more in regular niceties, at the cost of not having a distributed framework built in, requiring you to reach for Akka. I certainly find F# significantly easier to read and write than Erlang.

Elixir is one of the most productive languages around right now.

Run into something slow? Replace that small bit with a Rust NIF and move on with your life.

Or write it in a language where you don't even have to think about it.

(To achieve this you have to maintain a separate toolchain and a set of exports in Rust, and complex types cannot be easily exported/imported, since you're subject to the limitations of the C ABI. FFI is not free either, even when it's as cheap as it gets: .NET has this with P/Invoke, yet it is still a deoptimization in data-intensive algorithms, which benefit from being rewritten in C#/F#.)

Complex Rust types are absolutely supported with Rustler. And with one command I can have Elixir pull a Rust crate and do 80% of the work setting it up for me to be able to use it in a project.

https://github.com/rusterlium/rustler
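As a sketch of what the Rust side of that looks like: the function below is plain, standalone Rust; in an actual Rustler project it would additionally carry Rustler's `#[rustler::nif]` attribute and be registered with the `rustler::init!` macro (both noted in comments but omitted so the sketch compiles without the crate).

```rust
// A CPU-bound function of the kind you might offload to a NIF.
// In a real Rustler project this would carry the `#[rustler::nif]`
// attribute, and a `rustler::init!(...)` line would register the
// module, letting Elixir call it like an ordinary function.
pub fn sum_of_squares(xs: &[i64]) -> i64 {
    xs.iter().map(|x| x * x).sum()
}
```

From Elixir the call then reads like any other function call, e.g. `MyNif.sum_of_squares([1, 2, 3, 4])` (module name hypothetical); Rustler marshals the list into the `&[i64]` and the result back into an Erlang integer.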

  • scns · 3 months ago
Picking Erlang for number crunching is a Bad Idea™. You can use NIFs in C or Rust for expensive computations, though. If you play to its strengths it outshines the competition, e.g. WhatsApp, and the famous nine nines (99.9999999%) of uptime, which works out to well under a second of downtime per year. Horses for courses.

[Edit]

Zero snark intended.

https://www.youtube.com/watch?v=JvBT4XBdoUE

> HN loves to sing praises to it yet never looks into the details, nor has the performance intuition to know that the use of predominantly interpreted languages is frequently inappropriate (note how Erlang is next to Ruby):

And you seem singularly focused on your belief that .NET is "the answer" every time a post promoting a BEAM-ecosystem language comes up. It's clear you like .NET - good for you. It's solid and has nice features. But you're painting performance as some sort of absolute. It's not.

> Also, F#, building on top of .NET, has rich ecosystem and a set of its own async abstractions with all kinds of actor patterns being popular

What if I don't want "all sorts of async abstractions", but just one that works well?

> Why am I criticizing Gleam? Because we had two decades of programming languages built on slow foundations imposing strict performance ceiling.

And those programming languages have been used to develop sites like GitHub, WhatsApp, Facebook and countless other internet-scale apps. Every language and ecosystem imposes some form of performance ceiling. If performance was all that mattered, we'd all be writing assembler. It's about trade-offs: some of them technical, some of them social.

.NET is a mature, stable, and performant ecosystem. You do it a disservice by rubbishing alternatives, as in your original comment ("Impossibly slow, just use F#").

--

EDIT: fixed spelling & grammar

  • lawn · 3 months ago
You're not seeing the fundamental trade-off that BEAM is taking.

They're not focused on maximizing throughput (number crunching) but on minimizing latency at the 99th percentile (keeping the server highly responsive even under heavy load).

You really need to understand the pros and cons of each tool. Dismissing the BEAM family because of a single attribute strikes me as a bit ignorant.

  • lpil · 3 months ago
I'm sorry, I don't understand what point you're trying to make here; it seems unrelated to Gleam.

For someone who's advocating a functional programming language, this is a weirdly terse "don't use this for performance reasons" take.
  • igouy · 3 months ago
"The most common class of 'less suitable' problems is characterised by performance being a prime requirement and constant-factors having a large effect on performance."

FAQ 1.4 What sort of problems is Erlang not particularly suitable for?

https://www.erlang.org/faq/introduction#idm53