The BEAM is the underlying VM used by Erlang, Elixir, and Gleam. It provides the basic primitives: spawning processes, sending messages, handling messages, and so on. Processes are lightweight, preemptively scheduled tasks, similar to goroutines or green threads in other languages. These primitives are mostly lower level than you want to deal with on a day-to-day basis.
OTP is a standard library built on top of that, providing a convenient, battle-tested way of building systems. The GenServer is the most well-known component: it simplifies writing code that handles messages one by one, updates held state, and sends replies. With the machinery around it, sending a message and getting the reply looks to the caller just like making a function call. GenServers are then managed by Supervisors that know what to restart when something crashes.
One notable difference between Elixir and Gleam is that Elixir gets to just reuse the OTP code as-is (with some Elixir wrappers on top for convenience). Gleam concluded that OTP is built expecting dynamic types, and that for best results in Gleam they'd need to re-implement the key primitives. That's why the example shown is an "Actor", not a GenServer: it serves the same purpose, and might even fit in a supervision tree, but isn't actually a GenServer.
I'm a data engineer so unless I feel like doing a lot of library development, I'll probably wait on the data ecosystem to advance a little bit more.
I hope it does some day, the BEAM feels like the perfect place for a distributed dataframe library to live.
I wrote Vleam to help incorporate Gleam into a Vue project, if you have one already
For the most part, Gleam feels like it has gathered the best ideas from some interesting langs (perhaps a chimera of Rust, Go, ML and Lisp) and put them into one coherent, simple language. It's not that the ideas are new, they're just very well organized.
Gleam's labelled arguments are, AFAIK, a unique feature. (Edit: Nope, see comments below.) They let a named function argument appear differently to the caller than it does internally. The caller may want to refer to it by a verb ("multiply_by") while the function refers to it as a noun ("multiplier").
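A minimal sketch of what that looks like (the function `scale` and its names are made up for illustration):

```gleam
// The label `multiply_by` is what callers see; `multiplier` is the
// name used inside the function body.
pub fn scale(value: Int, multiply_by multiplier: Int) -> Int {
  value * multiplier
}

pub fn main() {
  scale(10, multiply_by: 3)
}
```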
The `use <- ...` expression and the pipeline operator `|>` can reduce boilerplate from nested function calls in a lot of cases. It's a nod to the ergonomic benefits of metaprogramming without giving the user a full macro system.
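A sketch of both features (`shout` and `add_parsed` are made-up names; the imports are from Gleam's standard library):

```gleam
import gleam/int
import gleam/list
import gleam/result
import gleam/string

// Instead of string.join(list.map(words, string.uppercase), ", "):
pub fn shout(words: List(String)) -> String {
  words
  |> list.map(string.uppercase)
  |> string.join(", ")
}

// `use` flattens callback nesting: each line below would otherwise
// be another level of result.try(..., fn(x) { ... }).
pub fn add_parsed(a: String, b: String) -> Result(Int, Nil) {
  use x <- result.try(int.parse(a))
  use y <- result.try(int.parse(b))
  Ok(x + y)
}
```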
Finally, Gleam's tooling is the best in the business, hands down. The entire toolchain, the compiler, package manager, LSP, etc., is all built into a single binary with an ergonomic CLI. Compare that to cobbling together a Python environment, or pretty much any other language where you need to assemble and install your own piecemeal toolkit to be productive.
Very excited to try Gleam more.
OCaml also does this:
# let foo ~bar:baz = baz + 42;;
val foo : bar:int -> int = <fun>
# foo ~bar:12765;;
- : int = 12807
What I really enjoyed was its parameter forwarding, which is available in some languages for records:
# let bar = 42;;
val bar : int = 42
# foo ~bar;;
- : int = 84
Works similarly for optional values:
# let test ?(arg=5) () = arg;;
val test : ?arg:int -> unit -> int = <fun>
# let arg = None;;
val arg : 'a option = None
# test ?arg ();;
- : int = 5
And if you used ~arg, it would be a non-optional parameter:
# let arg = 5;;
val arg : int = 5
# test ~arg ();;
- : int = 5
Overall pretty spiffy. Partial application also works here nicely, but it does cause some trouble with optional arguments :/. For example, here "test" needs to have a final argument () so the optional parameter can be erased:
# test;;
- : ?arg:int -> unit -> int = <fun>
# test ();;
- : int = 5
Swift has this feature as well.
I think it is pretty much just ML on the BEAM? It resembles Rust and Go maybe in the ways that they have also been influenced by ML. The syntax is fairly C-like, which I guess resembles Rust more than OCaml, but I think that's surface level. I don't know Rust or Go, but I don't see any concepts in Gleam that aren't familiar from either OCaml or Erlang.
As for Lisp, I don't see it at all, except for treating functions as entities in their own right, which is a nearly universal language feature at this point and doesn't ring as particularly Lispish to me anymore.
Which is great, because elm is probably the best programming language I've ever used.
Oh yuck, I don't like that.
Function signatures are for HUMANS to read. Sure, the compiler can reach into a function implementation to glean what it returns, but that's no reason to force a human reader to waste time on it as well.
"I'll accept this and this from you and then return that kind of thing to you."
vs
"I'll accept this and this from you and then return something to you that your supervisor will tell you if you're using correctly or not - or you could watch me as I work to see what it is I've made for you."
As I said in another comment, as the compiler knows the type of everything, it would be easy to have the LSP overlay the type (or annotate it into the text via code action for you).
Luckily Gleam programmers do write return annotations in practice.
But I agree that specifying the returns explicitly is a bit nicer when reading the source, don't think it's a big issue that it's optional though.
If you were forced to specify the return type, your original intent becomes clear for all (rather than leaving it to a machine that can only assume the narrowest scope).
If the return type changes, the code using the function should definitely blow up until it's changed, don't think that's a bad thing.
This. The big upside of algebraic data types. The code won't blow up, though: the compiler shows you every piece of code where the added variant needs to be handled, and therefore ensures the program stays correct.
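To illustrate with a Gleam sketch (`Shape` and `area` are made-up names): pattern matching over a custom type is checked for exhaustiveness, so extending the type turns every unhandled `case` into a compile error rather than a runtime crash.

```gleam
pub type Shape {
  Circle(radius: Float)
  Square(side: Float)
}

// If a `Triangle` variant is later added to `Shape`, this `case`
// becomes inexhaustive and the compiler points at it (and every
// other match on `Shape`) until the new variant is handled.
pub fn area(shape: Shape) -> Float {
  case shape {
    Circle(radius) -> 3.14159 *. radius *. radius
    Square(side) -> side *. side
  }
}
```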
IntelliJ (the best Scala IDE) now has something called X-Ray mode, see the videos here: https://blog.jetbrains.com/scala/2023/12/21/the-x-ray-mode/
Alternatively, it also allows you to configure it to always show the inferred types on functions.
Before, I annotated things a lot, but after having this I've never felt the need to annotate functions ever again. I still do it for certain things though but for other reasons than readability (e.g. for type-driven development).
The cases where I think it is necessary to annotate, I can do so.
Worth noting that that can be reactive, too. If a type-checking error message is confusing or insufficiently localized, pinning down a few types around it can clarify.
As tome says, though, that can be a thing in Haskell too. There exist situations in Haskell where the compiler cannot figure out the types for you, but they're not most code.
Nope, exactly the other way around. Function signatures are to help machines optimize your code. What humans should care about are names. It's just that we've normalized an extremely misguided practice of abusing type systems as a crutch for non-descriptive variable and function names.
// For example, this:
fn get_driving_requirements(vehicle: Vehicle) -> String {
...
}
// Is just so much nicer than this:
fn get_driving_requirement_description_from_vehicle(vehicle) {
...
}
This is nicer for me (a human) to read, even if the compiler can infer the types of both. And maybe we just abuse formal logic because we've forgotten how to write good prose.
The little bit of time it takes to annotate the type that the function returns will save a lot of time when other people take over the codebase.
Humans are not great compilers, and spending time trying to figure out what the hell is returned from a function is a waste.
That's why I leave it to the compiler and don't try to do it as a human.
Yeah that's fine, although my point is that being unable to figure out the return type is a symptom of a bigger problem imo. It means we're outsmarting ourselves.
Either way, I use (and honestly love) python on a daily basis and I wish my gripes with it were on this small a level. Yes, I'm looking at you, passed by assignment and late binding closure.
Guess it's not working 100% if someone feels this way! However, I would say that since the compiler/LSP knows the full type of every function it should be possible to have a code action that says 'annotate the type of this function for me'.
I want to know what your function COULD produce, not what it happens to produce now. Type inference can only show that in every current path you return a TYPE_X, and that works fine until one day you return an instance of its supertype TYPE_A, and now my code blows up because I misunderstood the contract. You may have always intended for it to return TYPE_A and its subtypes, but you never signaled that intent to others. This is why enforcing explicit contracts is necessary.
I agree that packages authors really should annotate their types though, and possibly that they ought to be forced to do so, just to make reading the code easier.
I really want to see a demonstration that it can work with Phoenix LiveView, though, or a Gleam equivalent with good IDE and tooling support. The productivity of the Phoenix LiveView + Elixir stack, and the ability to build async message-driven systems in concert with the UI, remains a killer feature for me.
Also, if you're looking to pick up Elixir or Erlang, I don't think there's a better tool. It's a Jupyter-style notebook that feels really, really good.
There's tons of pros to each. I guess I will continue to have fun with both!
The OTP example lets me state one of my few sore points about all the BEAM languages I worked with or looked at: the handle_this / handle_that madness. Search for "type AsyncTaskMessage" in the post to get there.
I don't want to write code like this (I omit the details; the "..." ellipses are mine):
type AsyncTaskMessage {
  Increment(reply_to: Subject(Int))
  Decrement(reply_to: Subject(Int))
}

fn handle_async_task(message ...) {
  case message {
    Increment(client) -> {
      code ...
    }
    Decrement(client) -> {
      code ...
    }
    ...
  }
}
I want to write code like this:

type AsyncTaskMessage {
  Increment(reply_to: Subject(Int))
  Decrement(reply_to: Subject(Int))
}

fn Increment(client) {
  code ...
}

fn Decrement(client) {
  code ...
}
or any variation of that, maybe this OO-ish one (after all, this is a stateful object):

type AsyncTaskMessage {
  fn Increment(reply_to: Subject(Int)) {
    code ...
  }
  fn Decrement(reply_to: Subject(Int)) {
    code ...
  }
}
Maybe one of the next BEAM languages will handle that automatically for us.

It not being automatic is a feature: that pattern is only what you want in the trivial case, but in real programs you are going to want more control.
In early Elixir there was a relatively popular macro library[1] that offered what you ask for (see code example below) but as Elixir matured and started being used in more than toy projects it was largely abandoned.
defmodule Calculator do
  use ExActor.GenServer

  defstart start_link, do: initial_state(0)
  defcast inc(x), state: state, do: new_state(state + x)
  defcast dec(x), state: state, do: new_state(state - x)
  defcall get, state: state, do: reply(state)
  defcast stop, do: stop_server(:normal)
end
[1]: https://github.com/sasa1977/exactor

> fn handle_async_task(message ...) {
>   case message {
>     Increment(client) -> {

In Erlang, to "just call the damn function" I'd write it something like:

handle_cast(R=#message{type=increment}, S) -> work_module:increment(R, S);
handle_cast(R=#message{type=decrement}, S) -> work_module:decrement(R, S).

Or I'd use maps instead of records. I wonder why they decided to not permit that sort of thing. :(
But also, just try Elixir? In a lot of ways it handles like a typed language, because of exhaustive pattern matching and being able to guard/match in function parameters. There's not much practical difference between matching on Some(data) or {:ok, data}. I prefer having types too, but for everything Elixir (or Erlang, for that matter) gives you, it's a manageable compromise.
Anyway, Elixir is getting a type system right now. I like Gleam a lot, but I'm not sure it's really aiming to be a universal Elixir replacement.
Other than that, Gleam seems pretty neat. Wish it the best of luck.
Here is a critical review of the officially listed limitations of Gleam's OTP:
1. Lack of Support for Named Processes: This is a significant limitation because named processes are essential for easy reference and communication between different parts of the system. Without support, developers might resort to workarounds that could lead to errors and inefficiencies.
2. Untyped Global Mutable Variables: The use of untyped global mutable variables introduces potential risks, such as runtime errors and unpredictable behavior, especially since they can be uninitialized. This undermines the type safety and reliability generally expected in functional programming environments.
3. Limited Actor Abstractions: The current scarcity of actor abstractions could restrict developers from implementing more complex or varied concurrency patterns, limiting the utility of the library until more abstractions are added.
4. Unsupported OTP System Messages: Dropping unsupported system messages can lead to unexpected behavior and bugs that are difficult to trace. Full support for OTP system messages is crucial for reliable and predictable actor-based concurrency.
5. Uniform Shutdown Period for Supervisors' Children: Not having different shutdown periods for child processes, especially for child supervisors, can lead to improper shutdowns and potential data loss or corruption. This deviates from the behavior in Erlang and Elixir, where more granular control is available.
6. Limited Testing: The lack of extensive testing, both in unit tests and real-world applications, indicates that the library might be unstable or have undiscovered bugs. This could affect its adoption and reliability in production environments.
Lustre will eventually have a companion ui library to plug this gap but it turns out maintaining all this stuff is a bit of a time sink, so I'm taking baby steps ^.^
This is undoubtedly true, although if Gleam ever becomes as popular as JavaScript, there will almost inevitably be the same set of choices to be made for Gleam.
(As another example besides rust-analyzer, there's also all the Go dependency management tools that existed before native Go modules, e.g. Dep, Glide, Go Package Manager, etc.)
Huh? I don't think so at all, Go and Rust did this same thing (include great and easy to use tooling as part of the language), they are super popular now and the tooling is still great.
However, there is a point here: F# offers the things that people are getting all excited about with Gleam (pragmatic functional programming in the ML school) and yet receives little hype.
Perhaps it needs a BEAM compiler target?
I've known about F# since my university days, and I'm guilty of never really digging into it except for toy projects. I've worked at many companies, and most of them were hostile to Microsoft. That would explain why I've stayed far away from .NET-affiliated technologies. We can all agree that this is a silly reason, but if you wonder why F# doesn't get any love, I'd bet it's because a lot of people are ignoring Microsoft. Let's change that.
Nowadays, you can use .NET on Linux with VS Code and Ionide (open-source LSP). No need to ever talk to a Microsoft sales rep :)
But I guess that could make HN like it instead?
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
With that said, I had no idea it could target JS and will need to look into that, thanks (V8 is surprisingly competent at number crunching if it can inline everything, but is otherwise still subject to the harsh limitations of JS's dynamic nature).
Also, F#, building on top of .NET, has a rich ecosystem and a set of its own async abstractions, with all kinds of actor patterns being popular (within the very small F# community, that is). It's fast, and so is its networking stack.
Why am I criticizing Gleam? Because we had two decades of programming languages built on slow foundations imposing strict performance ceiling. There is no reason for nice languages like Gleam to be slow, yet they are. I’m not sure why, maybe because of the myth of compiled languages requiring more effort and interpreted languages being more productive to write in?
I suppose if you want to do some distributed mandelbrot in Erlang, you'd just use NIFs to use an actual systems language for compute.
> maybe because of the myth of compiled languages requiring more effort and interpreted languages being more productive to write in?
Erlang focuses its ergonomics around distributed processing, dispatch, and communication. Some of that really is in the decisions around the language (e.g., fully functional, hot-swappable and remotely dispatchable modules (using message passing), metaprogramming, pattern matching), the runtime (e.g., GenServer, the Supervisor, lightweight processes with isolated garbage collection), and the core libraries (e.g., ETS, HTTP client/server), but it's also the ecosystem that has built up around Erlang because of its soft real-time flavor. If things like soft real-time, high process counts, and network dispatch aren't really interesting to you, and the language isn't your cup of tea, then you aren't the target market for Erlang and its ilk. But it is certainly useful and productive for a number of people/orgs who've made the leap to OTP based on actual business value (e.g., Discord and, historically, WhatsApp).
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
– Joe Armstrong
If you have a set of tools with comparable degrees of features and productivity*, but one of them comes at a steep performance cost, perhaps you might want to pick the one that is snappier and easier to use?
* Though one could argue F# offers more in regular niceties at the cost of not having distributed framework built-in, requiring you to reach for Akka. I certainly find F# significantly easier to read and write than Erlang.
Run into something slow? Replace that small bit with a Rust NIF and move on with your life.
(you have to maintain a separate toolchain and exports in Rust to achieve this, and complex types cannot be easily exported/imported - it is subject to the limitations of the C ABI, and FFI is not free either, even when it's as cheap as it gets - .NET has that with P/Invoke, yet it is still a deoptimization in data-intensive algorithms, which benefit from being rewritten in C#/F#)
[Edit]
Zero snark intended.
And you seem singularly focused in your belief that .Net is "the answer" every time a post promoting a BEAM ecosystem language comes up. It's clear you like .Net - good for you. It's solid and has nice features. But you're painting performance as some sort of absolute. It's not.
> Also, F#, building on top of .NET, has rich ecosystem and a set of its own async abstractions with all kinds of actor patterns being popular
What if I don't want "all sorts of async abstractions", but just one that works well?
> Why am I criticizing Gleam? Because we had two decades of programming languages built on slow foundations imposing strict performance ceiling.
And those programming languages have been used to develop sites like GitHub, WhatsApp, Facebook and countless other internet-scale apps. Every language and ecosystem imposes some form of performance ceiling. If performance was all that mattered, we'd all be writing assembler. It's about trade-offs: some of them technical, some of them social.
.Net is a mature, stable, and performant ecosystem. You do it a disservice by rubbishing alternatives per your original comment ("Impossibly slow, just use F#").
--
EDIT: fixed spelling & grammar
They're not focusing on maximizing throughput (number crunching) but on minimizing latency at the 99th percentile (keeping the server highly responsive even under heavy load).
You really need to understand the pros and cons of each tool. Dismissing the BEAM family because of a single attribute strikes me as a bit ignorant.
FAQ 1.4 What sort of problems is Erlang not particularly suitable for?