For those that don't know, it's also built upon OTP, the Erlang VM that makes concurrency and queues a trivial problem, in my opinion.
Absolutely wonderful ecosystem.
I've been wanting to make Gleam my primary language, but I fear LLMs have frozen programming language advancement and adoption for anything past 2021.
But I am hopeful that Gleam has slid just under the closing door and LLMs will get up to speed on it fast.
In fact, I'd say most of the Gleam code that has been generated has been surprisingly reliable and easy to reason about. I suspect this has to do with the static typing, incredible language tooling, and small surface area of the language.
I literally just copy the docs from https://tour.gleam.run/everything/ into a local MD file and let it run. Packages are also well documented, and Claude has had no issue looping with tests/type checking.
In the past month I've built the following, all primarily with Claude writing the Gleam parts:
- A websocket-first analytics/feature flag platform (Gleam as the backend): https://github.com/devdumpling/beacon
- A realtime holiday celebration app for my team where Gleam manages presence, cursor state, emojis, and guestbook writes (still rough): https://github.com/devdumpling/snowglobe
- A private autobattler game backend built for the web
While it's obviously not as well-trodden as building in TypeScript or Go or Rust, I've been really happy with the results as someone not super familiar with the BEAM/Erlang.
EDIT: Sorry I don't have demos up for these yet. Wasn't really ready to share them but felt relevant to this thread.
Why would that be the case? Many models have knowledge cutoffs in this calendar year. Furthermore I’ve found that LLMs are generally pretty good at picking up new (or just obscure) languages as long as you have a few examples. As wide and varied as programming languages are, syntactically and ideologically they can only be so different.
Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up.
I feel as though "facts" such as this are presented to me all the time on HN, but in my everyday job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push against.
I know, I know: "they just don't know how to use LLMs the right way!!!" But all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile, the ones that never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down.
If we can continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs that continued to devote some of their free time to coding in languages like Gleam, and continued to maintain and sharpen their ability to understand and reason about code.
* One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.
* One developer has an LLM making his PRs. He slurped up my unfinished branch, PRed it, and merged (!) it. One can only guess that the approver was also using an LLM. When I asked him why he did it, he was completely baffled and assured me he would never. Source control tells a different story.
* And I forgot to turn off LLM autocomplete after setting up my new machine. The LLM wouldn't stop hallucinating non-existent constructors for non-existent classes. Bog-standard IntelliSense did what I needed in seconds after I turned off LLM autocomplete.
LLMs sometimes save me some time. But overall I'm sitting at a pretty big amount of time wasted by them that the savings have not yet offset.
So the LLM was not told how to run the tests? Without that they cannot know if what they did works; they are a bit like humans: they try something and then they need to check if it does the right thing. Without a test cycle you definitely don't get a lot out of LLMs.
The bigger story here is not that they forgot to tell the LLM to run tests, it's that agentic use has been so normalized and overhyped that an entire PR was attempted without any QA. Even if you're personally against this, this is how most people talk about agents online.
You don't always have the privilege of working on a project with tests, and rarely are they so thorough that they catch everything. Blindly trusting LLM output without QA or Review shouldn't be normalized.
You should be reviewing everything that touches your codebase regardless of source.
It's not hard to find comments from people vibe coding apps without understanding the code, even apps handling sensitive data. And it's not hard to find comments saying agents can run by themselves.
I mean people are arguing AGI is already here. What do you mean who is normalizing this?
And if you want to try... well you get what you get!
But again, no one who is serious about their business and serious about building useful products is doing this.
While this is potentially true for software companies, there are many companies for which software or even technology in general is not a core competency. They are very serious about their very useful products. They also have some, er, interesting ideas about what LLMs allow them to accomplish.
Where is everyone working where they can just ship broken code all the time?
I use LLMs for hours, every single day, yes sometimes they output trash. That’s why the bottleneck is checking the solutions and iterating on them.
All the best engineers I know, the ones managing 3-4 client projects at once, are using LLMs nonstop and outputting 3-4x their normal output. That doesn’t mean LLMs are one-shotting their problems.
You are overlooking a blind spot that is increasingly becoming a weakness for devs. You assume that businesses care that their software actually works. It sounds crazy from the dev side, but they really don't. As long as cash keeps hitting accounts, the MBAs in charge do not care how it gets there, and the program to find that out only requires one simple, unmistakable algo: money in minus money out.
Evidence:
Spreadsheets. These DSL-lite tools are almost universally known to be generally wrong and full of bugs. Yet, the world literally runs on them.
Lowest bidder outsourcing. It's well known that various low-cost outsourcing produces non-functional or failed projects, or projects that limp along for years with nonstop bug stomping. Yet business is booming.
This only works in a very rich empire that is in the collapse/looting phase. Which we are in and will not change. See: History.
I once toured a dairy farm that had been a pioneer test site for Lasix. Like all good hippies, everyone I knew shunned additives. This farmer claimed that Lasix wasn't a cheat because it only worked on really healthy cows. Best practices, and then add Lasix.
I nearly dropped out of Harvard's mathematics PhD program. Sticking around and finishing a thesis was the hardest thing I've ever done. It didn't take smarts. It took being the kind of person who doesn't die on a mountain.
There's a legendary Philadelphia cook who does pop-up meals, and keeps talking about the restaurant he plans to open. Professional chefs roll their eyes; being a good cook is a small part of the enterprise of engineering a successful restaurant.
(These are three stool legs. Neurodivergents have an advantage using AI. A stool is more stable when its legs are further apart. AI is an association engine. Humans find my sense of analogy tedious, but spreading out analogies defines more accurate planes in AI's association space. One doesn't simply "tell AI what to do".)
Learning how to use AI effectively was the hardest thing I've done recently, many brutal months of experiment, test projects with a dozen languages. One maintains several levels of planning, as if a corporate CTO. One tears apart all code in many iterations of code review. Just as a genius manager makes best use of flawed human talent, one learns to make best use of flawed AI talent.
My guess is that programmers who write bad code with AI were already writing bad code before AI.
Best practices, and then add AI.
I wrote my own language; LLMs have been able to work with it at a good level for over a year. I don't do anything special to enable that - just front-load some key examples of the syntax before giving the task. I don't need to explain concepts like iteration.
Also, LLMs can work with languages with unconventional paradigms - kdb comes up fairly often in my world (an array language, also written right to left).
If this does appear to become a problem, it should not be hard to apply the same RLHF infrastructure that's used to get LLMs effective at writing syntactically correct, goal-accomplishing code in existing programming languages to new ones.
That would make sense if LLMs understood the domains and the concepts. They don't. They need a lot of training data to "map" the "knowledge transfer".
Personal anecdote: Claude stopped writing Java-like Elixir only some time around summer this year (Elixir is 13 years old), and is still incapable of writing "modern HEEx", which changed some of the templating syntax in Phoenix almost two years ago.
More trial and error because trial is cheap; in the end less typing, but hardly faster end results.
But consider: as LLMs get better and approach AGI you won't need a corpus: only a specification.
In this way, AI may enable more languages, not less.
It’d be like inventing a new assembly language when everyone is writing code in higher level languages that compile to assembly.
I hope it’s not true, but I believe that’s what OP meant and I think the concern is valid!
- Simple semantics (e.g. easy to understand for developers + LLMs, code is "obviously" correct)
- Very strongly typed, so you can model even very complex domains in a way the compiler can verify (see the sketch below)
- Really good error messages, to make agent loops more productive
- [Maybe] Easily integrates with existing languages, or at least makes it easy to port from existing languages
We may get to a point where humans don't need to look at the code at all, but we aren't there yet, so making the code easy to vet is important. Plus, there's also a few bajillion lines of legacy code that we need to deal with; wouldn't it be cool if you could port it (or at least extend it) into some standardized, performant, LLM-friendly language for future development?
We're still in early days with LLMs! I don't think we're anywhere near the global optimum yet.
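To make that "strongly typed domain modelling" bullet concrete, here's a minimal Gleam sketch (Payment and its fields are made-up names, not from any real library): the compiler rejects any unhandled variant, which is exactly the kind of cheap, early feedback an agent loop can act on.

    import gleam/int

    // Hypothetical domain type: a payment can only ever be one of two valid shapes.
    pub type Payment {
      Card(number: String, expiry_month: Int)
      BankTransfer(iban: String)
    }

    // The compiler forces every variant to be handled; forgetting one is a
    // compile error rather than a runtime surprise.
    pub fn describe(payment: Payment) -> String {
      case payment {
        Card(_, expiry_month) -> "Card expiring in month " <> int.to_string(expiry_month)
        BankTransfer(iban) -> "Transfer from " <> iban
      }
    }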
Isn't that what WASM is? Or more or less what is going on when people devise a new intermediate representation for a new virtual machine? Creating new assembly languages is a useful thing that people continue to do!
This isn't correct. It can compile to run on the BEAM: that is the Erlang VM. OTP isn't the Erlang VM; rather, "OTP is set of Erlang libraries and design principles providing middle-ware to develop [concurrent/distributed/fault tolerant] systems."
Gleam itself provides what I believe is a substantial subset of OTP support via a library: https://github.com/gleam-lang/otp
Importantly: "Gleam has its own version of OTP which is type safe, but has a smaller feature set. [vs. Elixir, another BEAM language with OTP support]"
The comment you are replying to is correct, and you are incorrect.
All OTP APIs are usable as normal within Gleam, the language is designed with it in mind, and there’s an additional set of Gleam specific additions to OTP (which you have linked there).
Gleam does not have access to only a subset of OTP, and it does not have its own distinct OTP-inspired alternative. It uses the OTP framework.
The library the parent links to says this:
> Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included.
Does this mean in practice that you can use all parts of OTP, but you might lose type checking for the parts the library doesn't cover?
What's the state of Gleam's JSON parsing / serialization capabilities right now?
I find it to be a lovely little language, but having to essentially write every type three times (once for the type definition, once for the serializer, once for the deserializer) isn't something I'm looking forward to.
A functional language that can run both on the backend (Beam) and frontend (JS) lets one do a lot of cool stuff, like optimistic updates, server reconciliation, easy rollback on failure etc, but that requires making actions (and likely also states) easily serializable and deserializable.
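To show what the "three times" above looks like in practice, here's a minimal sketch assuming the current gleam_json and gleam/dynamic/decode APIs (the User type is invented for illustration):

    import gleam/dynamic/decode
    import gleam/json

    // 1. The type definition.
    pub type User {
      User(name: String, age: Int)
    }

    // 2. The serialiser.
    pub fn user_to_json(user: User) -> json.Json {
      json.object([
        #("name", json.string(user.name)),
        #("age", json.int(user.age)),
      ])
    }

    // 3. The deserialiser.
    pub fn user_decoder() -> decode.Decoder(User) {
      use name <- decode.field("name", decode.string)
      use age <- decode.field("age", decode.int)
      decode.success(User(name: name, age: age))
    }

None of it is hard, but every field shows up in three places, which is the duplication being complained about and what serde-style derivation would remove.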
But also, you shouldn't think of it as writing the same type twice! If you couple your external API and your internal data model you are greatly restricting your domain modelling capability. Even in languages where JSON serialisation works with reflection I would recommend having a distinct definition for the internal and external structure so you can have the optimal structure for each context, dodging the "lowest common denominator" problem.
Hi, what do people use to generate them? I found gserde (edit: and glerd-json).
I wonder why this is preferred over codegen (during build), possibly using some kind of annotations?
I'm waiting for something similar to serde in Rust, where you simply tag your type and it'll generate type-safe serialization and deserialization for you.
Gleam has some feature to generate the code for you via the LSP, but it's just not good enough IMHO.
Could you point to a solution that provides serde level of convenience?
Edit: The difference between generating code (like with Gleam) and having macros generate the code from a few tags is quite big. Small tweaks are immediately obvious in serde in Rust, but they drown in the noise of the complete serialization code like with the Gleam tools.
To be fair, Rust's proc macros are only locally optimal:
While they're great to use, they're only okay to program.
Your proc-macro needs to live in another crate, and writing proc macros is difficult.
Compare this to dependently typed languages or Zig's comptime: it should be easier to make derive(Serialize, Deserialize) a compile-time feature inside the host language.
Since Gleam doesn't have Rust-style derivation, that leaves room for a future where this is solved even better.
We regularly collect feedback and haven’t got problems reported here, so your feedback saying otherwise would be a useful data point.
Thank you for Gleam btw, I do really like the rest of the language.
Take for example this section of the Gleam website FAQ section:
https://gleam.run/frequently-asked-questions/#how-does-gleam...
"Elixir has better support for the OTP actor framework. Gleam has its own version of OTP which is type safe, but has a smaller feature set."
At least on the surface, "but has a smaller feature set" suggests that there are features left off the table, which I think it would be fair to read as a subset of support.
If I look at this statement from the Gleam OTP Library `readme.md`:
"Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included. Other features are still in development, such as further process supervision strategies."
That quote leaves the impression that OTP is not fully supported and therefore only a subset is. It doesn't expound further to say unsupported OTP functionality is alternatively available by accessing the Erlang modules/functions directly or through other mechanisms.
In all of this I'll take your word for it over the website and readme files; these things are often not written directly by the principals and are often not kept as up-to-date as you'd probably like. Still even taking that at face value, I think it leaves some questions open. What is meant by supporting all of OTP? Where the documentation and library readme equivocates to full OTP support, are there trade-offs? Is "usable as normal" usable as normal for Erlang or as normal for Gleam? For example, are the parts left out of the library available via directly accessing the Erlang modules/functions, but only at the cost of abandoning the Gleam type safety guarantees for those of Erlang? How does this hold for Gleam's JavaScript compilation target?
As you know, Elixir also provides for much OTP functionality via direct access to the Erlang libraries. However, there I expect the distinction between Elixir support and the Erlang functionality to be substantially more seamless than with Gleam: Elixir integrates the Erlang concepts of typing (etc.) much more directly than does Gleam. If, however, we're really talking about full OTP support in Gleam while not losing the reasons you might choose Gleam over Elixir or Erlang, which I think is mostly going to be about the static typing... then yes, I'm very wrong. If not... I could see how strictly speaking I'm wrong, but perhaps not completely wrong in spirit.
> Elixir also provides for much OTP functionality via direct access to the Erlang libraries.
This is the norm in Gleam too! Gleam’s primary design constraint is interop with Erlang code, so using these libraries is straightforward and commonplace.
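For anyone who hasn't seen it, this is roughly what "using Erlang code directly" looks like; a minimal sketch using Gleam's external bindings (erlang:unique_integer/0 is a real BIF, but the Gleam type on the binding is asserted by the programmer rather than checked, which is where the type-safety trade-off discussed above lives):

    // Bind an Erlang function directly. The same mechanism works for any
    // Erlang/OTP module; the annotation is a promise, not a checked fact.
    @external(erlang, "erlang", "unique_integer")
    pub fn unique_integer() -> Int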
This can be just my lack of familiarity with the ecosystem though.
Gleam looks lovely and IMO is the most readable language that runs on the BEAM VM. Good job!
That aside, it is normal in Elixir to use Erlang OTP directly. Neither Elixir nor Gleam provides an entirely alternative API for OTP. It is a strength that BEAM languages call each other, not a weakness.
Here we are, having a technical discussion and here you are, shoving politics into it.
Elixir is slowly rolling out set-theoretic typing: https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h...
Why use something complex and half working, when you can have the real thing?
BTW in the 90s people tried to come up with a type system for Erlang, and failed:
--- start quote ---
Phil Wadler[1] and Simon Marlow [2] worked on a type system for over a year and the results were published in [3]. The results of the project were somewhat disappointing. To start with, only a subset of the language was type-checkable, the major omission being the lack of process types and of type checking inter-process messages. Although their type system was never put into production, it did result in a notation for types which is still in use today for informally annotating types.
Several other projects to type check Erlang also failed to produce results that could be put into production. It was not until the advent of the Dialyzer [4] that realistic type analysis of Erlang programs became possible.
https://lfe.io/papers/%5B2007%5D%20Armstrong%20-%20HOPL%20II...
--- end quote ---
[1] Yes, that Philip Wadler, https://en.wikipedia.org/wiki/Philip_Wadler
[2] Yes, that Simon Marlow, https://en.wikipedia.org/wiki/Simon_Marlow
[3] A practical subtyping system for Erlang https://dl.acm.org/doi/10.1145/258948.258962
Gleam can call any Erlang function, and can somewhat handle the idc types. [I'm sure it has another name.]
Did I miss something that Gleam fails on? Because this is one of my concerns.
- Limited OTP system messages. Gleam doesn't yet support all OTP system messages, so some OTP debugging messages are discarded by Gleam.
- Gleam doesn't have an equivalent of gen_event to handle event handlers.
- Gleam doesn't support DynamicSupervisor or the :simple_one_for_one for dynamically starting children at runtime.
Examples?
Positive:
- It can be pretty performant if you do it right. For example, with some thought I got many days' solutions down to double-digit microseconds. That said, you do need to be careful how you write it, and many patterns that work well in other languages fall flat in Gleam.
- The language server is incredibly good. It autoformats, autocompletes even with functions from not-yet-imported-but-known-to-the-compiler packages, shows hints regarding code style and can autofix many of them, autofills missing patterns in pattern matches, automatically imports new packages when you start using them, and much much more. It has definitely redefined my view of what an LSP can do for a language.
- The language is generally a joy to work with. The core team has put a lot of effort into devex and it shows. The pipe operator is nice as always, the type system is no Haskell but is expressive enough, and in general it has a lot of well-thought-out interactions that you only notice after using it for a while.
Negative:
- The autoformatter can be a bit overly aggressive in rewriting (for example) a single line function call with many arguments to a function call with each argument on a different line. I get that not using "too much" horizontal space is important, but using up all my vertical space instead is not always better.
- The language (on purpose) focuses a lot on simplicity over terseness, but sometimes it gets a little bit much. Having to type `list.map` instead of `map` or `dict.Dict` instead of `Dict` a hundred times does add up over the course of a few weeks, and does not really add a lot of extra readability. OTOH, I have also seen people who really really like this part of Gleam, so YMMV.
- Sometimes the libraries are a bit lacking. There are no matrix libraries as far as I could find. One memoisation library needed a mid-AoC update because its v1.0 release had broken it and nobody had noticed for months; the maintainer did push out a fix within a day of realizing it was broken. The ones that exist and are maintained are great though.
I do agree the language server is great. And it works in basically any IDE, which is another huge bonus.
With regards to having to type `list.map`, you actually don't need to! You can do this:
    import gleam/list.{range, map}
    import gleam/int

    pub fn main() {
      range(0, 10) |> map(int.to_string) |> echo
    }
Some libraries just aren't there, and I do wonder how hard it would be to port C libraries over. Something I want to play with!

    case x < 0 {
      True -> ...
      False ->
        case x > 10 {
          True -> ...
          False ->
            case x <= 10 {
              True -> ...
              False -> ...
            }
        }
    }

    case x {
      n if x < 0 -> ...
      n if x > 10 -> ...
      n if x <= 10 -> ...
    }

Guards are a bit limited in that they cannot contain function calls, but that's a problem of the BEAM and not something Gleam could control.

> Guards are a bit limited in that they cannot contain function calls,
I feel like that's not a small sacrifice.
> but that's a problem of the BEAM and not something Gleam could control.
Could Gleam desugar to a case expression like I wrote above?
EDIT: I am wrong. Apparently there are, but it's a bit of a strange thing where they can only be used as clauses in `if` statements, and without doing any calculations.
Was this the time of everything or just the time of your code after loading in the text file etc.? The hello world starter takes around 110 ms to run on my PC via the script generated with `gleam export erlang-shipment` and 190 ms with `gleam run`. Is there a way to make this faster, or is the startup time an inherent limitation of Gleam/the BEAM VM?
I did it in F# this year and this was my feeling as well. All of the List.map and Seq.filter calls would have just been better called off of the actual list or Seq. Not having the functions attached to the objects really hurts discoverability too.
However in my experience it's much better than the alternative - e.g. clang-format's default "binpack"ing of arguments (lay them out like prose). That just makes them hard to read and leads to horrible diffs and horrible merge conflicts.
To me it felt less elegant than Scheme (GNU Guile) which I usually use (with nice parallelism if I want to, pipelines, and also pattern matching), and, aside from syntax, conceptually perhaps less elegant than Erlang. On the other hand it has static typing.
I also tried OCaml this year, but there are issues in the ecosystem making a truly reproducible environment/setup, because opam doesn't produce proper lock files (only version numbers) and it seemed silly to not be able to even include another file, without reaching for dune, or having to specify every single file/module on command line for the OCaml compiler. So I was left unsatisfied, even though the language is elegant and I like its ML-ness. I wish there was a large ecosystem around SML, but oh well ...
Might be I should take another look at Erlang soon, or just finally get started with Haskell. Erlang using rebar3 should have proper lock files, has pattern matching, and if I remember correctly no such limitations for calling functions recursively. No longer sure how or whether Erlang did inner functions though.
I like Haskell in theory, but: just to get a hello world takes a lot of CPU and disk space. The standard library is full of exceptions (you can use a different prelude, that opens a whole different can of worms). The ergonomics of converting between the thousand different string types are awful.
So, you being basically me, I have some recommendations:
Idris (2): good stdlib, has dependent types. A beautiful language. The compiler is self-hosted and bootstrapped by lisp - very elegant! The ecosystem is basically nonexistent though.
PureScript: also improves on Haskell in many ways. But, it's more of a frontend language, and though you can do backend, you're stuck with JavaScript runtime. Oh well.
By the way, the number of partial functions in base that throw compiler warnings is increasing, for example:
https://hackage.haskell.org/package/base-4.21.0.0/docs/Prelu...
I hope it will increase further.
    array
      .slice(0, 10)
      .filter(s => s[0].toLowerCase() < 'm') // a<->l
      .map(s => s.toUpperCase());

It seems like it should be a common feature to be able to view between each array operation in a debugger without having to augment the code with `echo`.

The out-of-bounds handling didn't seem all that good to me. Sure you can filter out undefined. You could also just make a function that returns an empty array if out of bounds, or an array of 1 element if not.

    // JS
    function getElemFromGrid(grid, x, y) {
      return (x < 0 || x >= grid.width ||
              y < 0 || y >= grid.height)
        ? []
        : [grid.elems[y][x]]
    }

    ...

    neighbors = [
      ...getElemFromGrid(grid, x + 1, y + 0),
      ...getElemFromGrid(grid, x + 1, y + 1),
      ...getElemFromGrid(grid, x + 0, y + 1),
      ...getElemFromGrid(grid, x - 1, y + 1),
      ...getElemFromGrid(grid, x - 1, y + 0),
      ...getElemFromGrid(grid, x - 1, y - 1),
      ...getElemFromGrid(grid, x + 0, y - 1),
      ...getElemFromGrid(grid, x + 1, y - 1),
    ]
I also find grids made of 2-dimensional arrays to be a code smell. An array of arrays is NOT A GRID, as there is nothing enforcing the inner arrays to be the same length as each other. Also, in efficient code it would be better to provide a 1-dimensional array and bounds. Now, out-of-bounds checks based on accessing outside the array won't work. You need to check actual bounds. Again giving preference to using a helper.

    list.map(fn(line) { line |> calculate_instruction })

Could be written

    list.map(calculate_instruction)

?

And I wonder if Gleam + Lustre could become the new Elm.
I have bumped into "the Elm architecture" in other projects though and it was nice.
Just so no one misunderstands this. The creator (Evan) didn't get into, or start, any drama himself that I ever noticed. I'd argue he's a very chill and nice dude.
I've been on the edges of the community for probably a decade now (lurker), and all of the drama came from other people who simply didn't like the BDFL and slow releases strategy.
I'm a backend dev mostly and use Elm for all my frontend needs. Yes there are some things compiler-side that could be improved, but basically it's fine.
I appreciate not having to keep up with new releases!
I can't believe this is still up, tbh. And I can't believe there are still people defending Elm's lack of development.
> It’s true that there hasn’t been a new release of the Elm compiler for some time. That’s on purpose: it’s essentially feature-complete.
Last talk I saw by Evan Czaplicki (from the 2025 Scala Days conf) he seemed to be working on some sort of database language https://www.youtube.com/watch?v=9OtN4iiFBsQ
Why? (I'm one such person defending Elm's lack of development)
Right now I'm toying with the idea of building a GNOME application in Rust, and the framework I'm using is relm4, which provides Elm-like abstractions over gtk-rs.
Previously I've built web applications with F# and Elmish, which again provides Elm-like abstractions for building F# applications.
What are you lacking in ruby and rails, besides the types?
Seems to be a filesystem, how would it replace a database?
I'm hoping it succeeds and gets bigger because I really like its ergonomics.
The advantage for LLMs in strongly typed languages, rather, is that compilers can catch errors early and give the model early automated feedback so you don't have to.
With weakly typed (and typically interpreted) languages they will need to run the code, which may be quite slow or not realistic.
Simply put, agentic coding loops prefer stronger static analysis capabilities.
Also, some non-static languages have a habit of least surprise in their codebases -- it's often possible to effectively guess the types flowing through at the call site. Needing zero rounds of refactoring feedback is better than needing even one.
The only problem I've ever had was that on maybe 3 total occasions it added a return statement, I assume because of the syntax similarity with Ruby.
But those are exactly the same mistakes most humans make when writing bash scripts, which makes them inherently flaky.
Ask it to write code in a language with types, a “logical” syntax where there are no tricky gotchas, with strict types, and a compiler which enforces those rules, and while LLMs struggle to begin with, they eventually produce code which is nearly clean and bug free. Works much better if there is an existing codebase where they can observe and learn from existing patterns.
On the other hand asking them to write JavaScript and Python, sure they fly, but they confidently implement code full of hidden bugs.
The whole "amount of training data" thing is completely overblown. I've seen code do well even with my own made-up DSL. If the rules are logical and you explain the rules to it and show it existing patterns, they can mostly do alright. Conversely, there is so much bad JavaScript and Python code in their training data that I struggle to get them to produce code in my style in these languages.
Contrast with the likes of Swift - been around for years but it’s so bloated and obscure that coding agents (not just humans) have problems using it fully.
And those people are the people that develop the body of material that later people (and now LLMs) learn from.
I have similar concerns to you - how well a language works with LLMs is indeed an issue we have to consider. But why do you assume that it's the volume of training data that drives this advantage? Another assumption, equally if not more valid IMO, is that languages which have fewer, well-defined, simpler constructs are easier for LLMs to generate.
Languages with sprawling complexity, where edge cases dominate dev time, all but require PBs of training data to be feasible.
Languages that are simple (objectively), with a solid unwavering mental model, can match LLMs strengths - and completely leap-frog the competition in accurate code gen.
A language doesn't have to be unique to still have a particular taste associated with its patterns and idioms, and it would be unfortunate if LLM influence had the effect of suppressing the ability for that new style to develop.
It seems semi-intuitive to me that a type-safe, functional programming language with referential transparency would be ideal if you could decompose a program into small components and code those.
Once you have a referentially transparent function with input/output tests you can spin on that forever until it's solved, and then be sure that it works.
But I think two things really hold it back:
* it's verbose.
* they compose awkwardly.
Neither of these are showstoppers, but I think fixing these problems--maybe with something like syntax-level support--could really lead to a beautiful programming language.
Elixir does have protocols, but they are extremely limited compared to type classes, traits, etc, and they're uncommonly used compared to writing concrete code.
Both type classes and interfaces desugar to higher-order functions, so anything you write with them can be written with first-class functions, though with a less concise API.
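A sketch of that desugaring in Gleam (all names here are made up for illustration): the "interface" becomes a record of functions that callers pass explicitly, i.e. dictionary passing.

    import gleam/list

    // The "interface": a record holding the function(s) an implementation must provide.
    pub type Comparator(a) {
      Comparator(gt: fn(a, a) -> Bool)
    }

    // Generic code written against the "interface".
    pub fn max_by(items: List(a), cmp: Comparator(a), default: a) -> a {
      case items {
        [] -> default
        [first, ..rest] ->
          list.fold(rest, first, fn(best, item) {
            case cmp.gt(item, best) {
              True -> item
              False -> best
            }
          })
      }
    }

    // Usage: the caller supplies the "instance" by hand.
    pub fn demo() -> Int {
      max_by([1, 3, 2], Comparator(gt: fn(a, b) { a > b }), 0)
    }

It works, but as noted you pay with a less concise API: every call site has to thread the dictionary through by hand.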
Of course dynamic dispatch can be implemented in almost every language. The Linux kernel uses dynamic dispatch with C!
But that's a hack, not a language feature.
PHP has interfaces and whatnot, but a lot of the time I do polymorphism by just having a class that has Closure members. When you can arbitrarily pass around functions like that, it's basically equivalent to an interface or abstract class, with a bit more flexibility.
> But you cannot do [first, ..middle, last].
I don't think you're supposed to do that! It's probably expensive!
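A sketch of the usual workaround (first_and_last is a hypothetical helper, not stdlib), which also shows why the pattern would be expensive: reaching the last element of a singly linked list means walking the whole list anyway.

    import gleam/list

    // Instead of matching [first, ..middle, last], fetch the two ends explicitly.
    // A one-element list yields Ok(#(x, x)).
    pub fn first_and_last(items: List(a)) -> Result(#(a, a), Nil) {
      case items, list.last(items) {
        [first, ..], Ok(last) -> Ok(#(first, last))
        _, _ -> Error(Nil)
      }
    }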
    document.body.style.setProperty('font-variant-ligatures', 'none', 'important');

By the developers' own action of adding generics, the golang team ultimately admits they were wrong, or that generics are better. If Gleam gets popular I think much of the same will occur.
There's simply too much repeated code without generics. I tried writing a parser combinator in Gleam and it wasn't pretty.
Codegen is more and more rare these days, because languages have so many tools to help you write less code - like generics. LLMs could, theoretically, help you crank out similar repetitive implementations of things.
Gleam doesn't support interfaces, not generics. You are completely right.
Haskell allows both sorts of generics. In Haskell parlance they call this higher-kinded polymorphism and the generic version of map they call fmap (as a method of the class Functor).
Gauche has a generic sequence interface which is great, and it's one of the reasons as a Python user I like Gauche as my "daily driver" Scheme.
It might be the same with Gleam, with its first version in 2019 and 1.0 in 2024. The language authors might think they are either unneeded and lead to anti-patterns, or are waiting to see the best way to implement them.
Which feels super strange, but doesn't seem to really be a problem, e.g. imagine a language where you'd write
    fun sum_all_numbers(Iterable<T> it) { it.fold(...) }  # an interface
    sum_all_numbers(a_list)                               # a list implements iterable

Gleam wants you to write

    fun sum_all_numbers(Iterator<T> it) { it.fold(...) }  # a concrete type
    sum_all_numbers(a_list.iterator)                      # get the caller to build the right object
edit: found an article that explained this a bit better https://mckayla.blog/posts/all-you-need-is-data-and-function...