I find C# can be a really good middle ground on the backend (not a Blazor fan)... the syntax and expressiveness improve with every release. You can borrow a lot of patterns from the likes of Go as well as FP approaches. What I don't care for are excessively complex (i.e. "Enterprise") environments where complexity is treated like a badge of honor instead of the burden of spaghetti that it is in practice.
Go is a bit unique as it has a really substantial stdlib, so you eliminate some of the otherwise-necessary deps, but it's also trivial to rely on established packages like Tokio etc., vendor them into your codebase, and not have to worry about it in the future.
Its stdlib exists because Go is a language built around a "good enough" philosophy, and it gets painful once you leave that path.
In a decade or so the awkward things about Go will have multiplied significantly, and it'll have many of the same problems Python currently has.
I was thinking of this quote from the article:
> Take it or leave it, but the web is dynamic by nature. Most of the work is serializing and deserializing data between different systems, be it a database, Redis, external APIs, or template engines. Rust has one of the best (de)serialization libraries in my opinion: serde. And yet, due to the nature of safety in Rust, I’d find myself writing boilerplate code just to avoid calling .unwrap(). I’d get long chain calls of .ok_or followed by .map_err. I defined a dozen of custom error enums, some taking other enums, because you want to be able to handle errors properly, and your functions can’t just return any error.
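To make that concrete, the boilerplate being described looks roughly like this (a made-up sketch: `ApiError`, `DbError`, `fetch_user`, and `User` are hypothetical names, not from the article):

```
use serde::Deserialize;

#[derive(Debug)]
enum DbError {
    ConnectionLost,
}

// One custom error enum wrapping other enums, as the article describes.
#[derive(Debug)]
enum ApiError {
    Db(DbError),
    MissingUser,
    BadPayload(serde_json::Error),
}

#[derive(Deserialize)]
struct User {
    name: String,
}

// Stand-in for a DB call: Ok(Some(json)) on a hit, Ok(None) on a miss.
fn fetch_user(_id: u64) -> Result<Option<String>, DbError> {
    Ok(Some(r#"{"name":"John Doe"}"#.to_string()))
}

// The handler becomes a chain of map_err / ok_or just to avoid .unwrap().
fn get_user_name(id: u64) -> Result<String, ApiError> {
    let row = fetch_user(id)
        .map_err(ApiError::Db)? // wrap the DB error
        .ok_or(ApiError::MissingUser)?; // wrap the "not found" case
    serde_json::from_str::<User>(&row)
        .map(|u| u.name)
        .map_err(ApiError::BadPayload) // wrap the deserialization error
}
```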
I was thinking: This is so much easier in Haskell.
Rather than chains of `ok_or()` and `map_err()` you use the functor interface
Rust:
``` call_api("get_people").map(|v| get_first_name(v)).map_or(0, |name| get_name_frequency(name)) ```
Haskell:
``` get_name_frequency . get_first_name <$> callApi "get_people" ```
It's just infinitely more readable, and the single `<$>` operator spares you an endless number of `map_or`, `ok_or`, and other error-handling calls.
However, having experience in large commercial Haskell projects, I can tell you the web apps also suffer from the dreaded dependency explosion. I know of one person who was fired from a project in no small part because a full build of the system he was handed took more than 24 hours, and a full build was triggered every week. He was on an older system, and the company failed to provide him with something newer, but ultimately it is a failing of the "everything and the kitchen sink" philosophy at play in dependency usage.
I don't have a good answer for this. I think aggressive dependency reduction and tracking transitive dependency lists is one step forward, but it's only a philosophy rather than a system.
Maybe the ridiculous answer is to go back to PHP.
Edit: changed "perfect" to "improve", as I meant "perfect" as "betterment" not in terms of absolute perfection.
At least it is my experience building some systems.
Not sure it is always a good calculus to defer the hard thinking to later.
Rust on the other hand has "log" as a clear winner, and significantly less overall fragmentation there.
Letting an API evolve in a third-party crate also provides more accurate data on its utility; you get a lot of eyes on the problem space and can try different (potentially breaking) solutions before landing on consensus. Feedback during a Rust RFC is solicited from a much smaller group of people with less real-world usage.
But this does more than just add a maintenance burden. If the API can't be removed, architectural constraints it imposes also can't be removed.
e.g. A hypothetical API that guarantees a callback during a specific phase of an operation means that you couldn't change to a new or better algorithm that doesn't have that phase.
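A quick sketch of that constraint (in Rust for brevity, with invented names; the point itself is language-agnostic):

```
/// Hypothetical public API that promises a hook at a specific phase.
pub trait SortObserver {
    /// Documented guarantee: called once, right before merging begins.
    fn on_merge_phase_start(&mut self, pending_runs: usize);
}

pub fn sort_with_observer(data: &mut Vec<u32>, observer: &mut dyn SortObserver) {
    // The contract forces the implementation to *have* a merge phase.
    observer.on_merge_phase_start((data.len() + 1) / 2);
    data.sort(); // stand-in for the merge-based implementation

    // A faster algorithm with no distinct merge phase (say, an in-place
    // radix sort) has no honest moment to fire this callback, so the old
    // API's guarantee pins the code to the old algorithm's shape.
}
```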
Realize the "log" api is bad? Make "log/slog". Realize the "rand" api is bad? Make "rand/v2". Realize the "image/draw" api is bad? Make "golang.org/x/image/draw". Realize the "ioutil" package is bad? Move all the functions into "io".
The stdlib already has at least 3 different patterns for duplicating API functionality with minor backwards-incompatible changes, and you can just do that and mark the old things as deprecated, but support them forever. Easy enough.
Is that 'supported'? Consider a library that uses a callback that exists in 'log' but not in 'slog': it'll compile forever, but it'll never work.
'Compiles but doesn't work' does not count as stable in my book. It's honestly worse than removing the API: both break, but one of them is noticed when the break happens.
When I update the rust compiler, I do so with very little fear. My code will still work. The rust stdlib backwards compatible story has been very solid.
Updating the Go compiler, I also get a new stdlib, and suddenly I get a bunch of TLS version deprecations, implicit http2 upgrades, and all sorts of new runtime errors which break my application (and always at runtime, not compile time). Bundling a large standard library with the compiler means I can't just update the tls package or just update the image package; I have to take or leave the whole thing. It's annoying.
They've decided the go1 promise means "your code will still compile, but it will silently behave differently, like suddenly 'time1 == time2' will return a different result, or 'http.Server' will use a different protocol", and that's somehow backwards compatible.
I also find the go stdlib to have so many warts now that it's just painful. Don't use "log", use "log/slog", except the rest of the stdlib that takes a logger uses "log.Logger" because it predates "slog", so you have to use it. Don't use the non-context methods (like 'NewRequest' is wrong, use 'NewRequestWithContext', don't use net.Dial, etc), except for all the places context couldn't be bolted on.
Don't use 'image/draw', use 'golang.org/x/image/draw' because they couldn't fix some part of it in a backwards compatible way, so you should use the 'x/' package. Same for syscall vs x/unix. But also, don't use 'golang.org/x/net/http2' because that was folded into 'net/http', so there's not even a general rule of "use the x package if it's there", it's actually "keep up with the status of all the x packages and sometimes use them instead of the stdlib, sometimes use the stdlib instead of them".
Go's stdlib is a way more confusing mess than rust's. In rust, the ecosystem has settled on one logging library interface, not like 4 (log, slog, zap, logrus). In rust, updates to the stdlib are actually backwards compatible, not "oh, yeah, sha1 certs are rejected now if you update the compiler for better compile speeds, hope you read the release notes".
> Don't use "log", use "log/slog", except the rest of the stdlib that takes a logger uses "log.Logger" because it predates "slog", so you have to use it.
What in the standard library takes a logger at all? I don't think I've ever passed a logger into the standard library.
> the ecosystem has settled on one logging library interface, not like 4 (log, slog, zap, logrus)
I've only seen slog since slog was added to the standard library. Pretty sure I've seen logrus or similar in the Kubernetes code, but that predated slog by a wide margin and anyway I don't recall seeing _any_ loggers in library code.
> In rust, the ecosystem has settled on one logging library interface
I mean, in Rust everyone has different advice on which crates to use for error handling and when to use each of them. You definitely don't have _more standards_ in the Rust ecosystem.
`net/http.Server.ErrorLog` is the main (only?) one, though there's a lot of third-party libraries that take one.
> I've only seen slog since slog was added to the standard library
Most go libraries aren't updated yet; in fact, I can't say I've seen any library using slog yet. We're clearly interfacing with different slices of the go ecosystem.
> in Rust everyone has different advice on which crates to use for error handling and when to use each of them. You definitely don't have _more standards_ in the Rust ecosystem.
They all are still using the same error type, so it interoperates fine. That's like saying "In go, every library has its own 'type MyError struct { .. }' that implements error, so go has more standards because each package has its own concrete error types", which yeah, that's common... The rust libraries like 'thiserror' and such are just tooling to do that more ergonomically than typing out a bunch of structs by hand.
Even if one dependency in rust uses hand-typed error enums and another uses thiserror, you still can just 'match' on the error in your code or such.
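A rough sketch of that interop (the error types and crates other than `thiserror` are invented for illustration):

```
use thiserror::Error;

// Dependency A: a hand-written error enum.
#[derive(Debug)]
pub enum CacheError {
    Miss,
    Corrupt(String),
}

// Dependency B: the same idea, generated more ergonomically by thiserror.
#[derive(Debug, Error)]
pub enum StoreError {
    #[error("record {0} not found")]
    NotFound(u64),
    #[error("backend unavailable")]
    Unavailable,
}

// Your code just matches on either; the compiler knows every variant.
fn advice(cache: &CacheError, store: &StoreError) -> &'static str {
    match (cache, store) {
        (CacheError::Miss, StoreError::NotFound(_)) => "nothing to serve",
        (CacheError::Corrupt(_), _) => "rebuild the cache",
        (CacheError::Miss, StoreError::Unavailable) => "retry later",
    }
}
```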
On the other hand, in Go you end up having to carefully read through each dependency's code to figure out if you need to be using 'errors.Is' or 'errors.As', and with what types, but with no help from the type-system since all errors are idiomatically type-erased.
I was trying to build an HTML generator in Rust and got pretty far, but I don't think I'll ever be happy with the API unless I learn some pretty crazy macro stuff, which I don't want. For that project, the "innovation tokens" idea really rings true for me: I spent months on the HTML gen for not much benefit.
Edit to add: It might not be an imperative language, but having written some HTML and asked the computer to interpret it, the computer now has a programmed capability, determined by what was written, that's repeatable and that was not available apart from the HTML given. QED.
This can be a double-edged sword. Yes, languages like Python and TypeScript/JavaScript will let you not catch an exception, which can be convenient. But that also often leads to unexpected errors popping up in production.
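As a small Rust contrast (the function here is invented, not from the thread): the fallibility lives in the signature, so the error case can't silently escape to production the way an uncaught exception can.

```
use std::num::ParseIntError;

// The possibility of failure is part of the return type.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    raw.parse::<u16>()
}

fn main() {
    // Silently dropping the Result warns (Result is #[must_use]), and using
    // the value without handling the Err case doesn't type-check.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port in config: {e}"),
    }
}
```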
but the truth is that Rust is not meant for everything. UI is an abstraction layer that is very human and dynamic. and i can come and say, “well, we can hide that dynamism with clever graph composition tricks” à la Elm, React, Compose, etc, but the machinery that you have to build for even the simplest button widget in almost every Rust UI toolkit is a mess of punctuation, with things like lifetimes and weird state management systems. you end up building a runtime when what you want is just the UI. that’s what higher level languages were made for. of course data science could be done in Rust as well, but is the lifetime of the file handle you’re trying to open really what you’re worried about when doing data analysis?
i think Rust has a future in the UI/graphics engine space, but you have to be pretty stubborn to use it for your front end.
There are real advantages to choosing a jack of all trades language for everything; for example it makes it easier for an engineer on one part of your project to help out on a different part of your project.
But it sounds like the OP didn't get any of the benefits of "jack of all trades", nor did he choose a field where Rust is "master of some".
> Similar thing can be said about writing SQL. I was really happy with using sqlx, which is a crate for compile-time checked SQL queries. By relying on macros in Rust, sqlx would execute the query against a real database instance in order to make sure that your query is valid, and the mappings are correct. However, writing dynamic queries with sqlx is a PITA, as you can’t build a dynamic string and make sure it’s checked during compilation, so you have to resort to using non-checked SQL queries. And honestly, with kysely in Node.js, I can get a similar result, without the need to have a connection to the DB, while having ergonomic query builder to build dynamic queries, without the overhead of compilation time.
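For context, the compile-time checking mentioned above looks roughly like this with sqlx's `query!` macro (the `users` table is invented; sqlx checks the SQL against a live database pointed to by `DATABASE_URL` at build time):

```
use sqlx::PgPool;

// query! verifies the SQL and its column types against a real Postgres
// instance during compilation; a typo in the table or column fails the build.
// (Assumes users.email is declared NOT NULL.)
async fn user_email(pool: &PgPool, id: i64) -> Result<Option<String>, sqlx::Error> {
    let row = sqlx::query!("SELECT email FROM users WHERE id = $1", id)
        .fetch_optional(pool)
        .await?;
    Ok(row.map(|r| r.email))
}
```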
I've used sqlx, and it's alright, but I've found things much easier after switching to sea-orm. Sea-orm has a wonderful query builder that makes it feel like you are writing SQL, whereas with sqlx you end up writing Rust that generates SQL strings, i.e. re-inventing query builders.
You also get type checking: define your table schema as a struct, and sea-orm knows what types your columns are. No active connection required. This approach lets you use Rust types for fields, e.g. Email from the email crate or Url from the url crate, which lets you constrain fields even further than what is easy to do at the DB layer.
ORMs tend to get a bad reputation for how some of them implement the active record pattern. For example, you might forget something is an active record and write something like "len(posts)" in SQLAlchemy, and suddenly you are counting records by pulling them from the DB one by one. I haven't had this issue with sea-orm, because it is very clear about what is an active record and what is not, and it is very clear when you are making a request out to the DB. For me, it turns out 90% of the value of an ORM is the query builder.
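Roughly what that looks like (a sketch from memory of sea-orm's entity and query API; the `users` table and columns are invented):

```
use sea_orm::entity::prelude::*;
use sea_orm::{DatabaseConnection, DbErr, QueryFilter};

// Schema as a struct: sea-orm derives the column types from the field types.
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "users")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub email: String,
    pub active: bool,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

// The query builder reads like the SQL it generates, and every round trip
// to the database is an explicit call (.all, .one, .count, ...).
pub async fn active_users(db: &DatabaseConnection) -> Result<Vec<Model>, DbErr> {
    Entity::find()
        .filter(Column::Active.eq(true))
        .all(db)
        .await
}
```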
And, IMO, making dynamic queries harder is preferable. Dynamic queries are inherently unsafe. They're sometimes necessary, but with them you have to start considering things like SQL injection attacks.
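Concretely (a sketch with an invented `users` table; the first version is what dynamic string-building tempts you into):

```
use sqlx::PgPool;

// BAD: splicing user input into the SQL string lets input like
// "x' OR '1'='1" change the meaning of the query.
async fn find_user_unsafe(pool: &PgPool, name: &str) -> Result<(), sqlx::Error> {
    let sql = format!("SELECT id FROM users WHERE name = '{name}'");
    sqlx::query(&sql).fetch_optional(pool).await?;
    Ok(())
}

// OK: a bound parameter keeps the input as data, never as SQL.
async fn find_user_safe(pool: &PgPool, name: &str) -> Result<(), sqlx::Error> {
    sqlx::query("SELECT id FROM users WHERE name = $1")
        .bind(name)
        .fetch_optional(pool)
        .await?;
    Ok(())
}
```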
This isn't to poo poo sea-orm. I'm just saying that sqlx's design choice to make dynamic queries hard is a logical choice from a safety standpoint.
[1] https://github.com/launchbadge/sqlx/issues/333#issuecomment-...
I've supported backends in typescript, python, Java, and Rust.
Rust pages me the least at night. Sleep is beautiful.
This whole paragraph is so true. The last couple of years have been pretty rough in Node land.
And even then I do it by serving JSON API's and not by serving HTML.
Typescript is pretty type-safe, and it's perfectly integrated with hot code reload, debuggers, and all the usual tools. Adding transpilation in that flow only creates friction.
That's also why things like Blazor are going nowhere. C# is nicer than Typescript, but the additional friction of WASM roundtrips just eats all the advantage.
I find the never type in TS, which actually is a proper bottom type, plus control-flow-based typing, vastly superior to what Rust offers.
Idk, it just feels like OP chose all the wrong approaches with Rust, including using a separate language and ecosystem for the frontend, which is where most of the friction comes from. For example, Dioxus is a React clone that is somehow leagues better than React (and Next.js, too), and it has hot-reloading that brings compiles down to subsecond times, which makes building UI with it just as productive as with Node / Vite etc. I use it for server side code as well and it's great. Compilation times can be an issue with Rust, it's something I miss from Go, but there are ways to improve on it, and just being smart about what deps you include, avoiding overuse of macros etc can make a difference. I know these things were not around when OP started using Rust for their application, but they are around now.
Node and TS are quite frankly inferior to Rust in most ways: a bad language, an ecosystem full of buggy unmaintained packages with the worst security profile of all the common languages, no unified build tooling (and the tooling that does exist seems to break your project every 6 months), constant churn of blessed frameworks and tools, a stdlib that is not much more comprehensive than Rust's and outright broken in some ways, at least three different approaches to modules (ESM, CommonJS, UMD, and more...?), I could go on and on. There is a reason why everyone seemingly reinvents the wheel in that ecosystem over and over again -- the language and platform are fundamentally not capable of achieving people's goals, and every solution developed comes with massive tradeoffs that the next iteration attempts to solve, but that just creates additional issues or regressions for future attempts to tackle.
I've been using Rust with Dioxus and was completely mind-blown when I started with it. Barely knowing any Rust (just React), I was able to jump right in and build with it; somehow it was more intuitive to me than most modern JS full-stack frameworks. It seemingly already has most if not all of the features that similar JS frameworks have been developing for years, and because it's written in Rust, things like conditional compilation are built into the language instead of being a third-party Babel plugin. That helps remove a ton of friction. And it's trivial to build those same apps for desktop and mobile as well, something that's basically not possible with the JS frameworks.
Even stuff like websockets: go try to implement a type-safe WebSocket connection with a server and client in Next.js or Astro. You'll need a ws library, something like Zod for validation, etc. In Rust it's just:
```
#[derive(Serialize, Deserialize, Clone)]
enum SocketMessage {
    Hello(i32),
}

#[get("/api/ws")]
async fn web_socket(options: WebSocketOptions) -> Websocket<SocketMessage> {
    options.on_upgrade(move |mut socket| async move {
        while let Ok(msg) = socket.recv().await {
            match msg {
                SocketMessage::Hello(id) => {} // handle messages
            }
        }
    })
}

fn App() -> Component {
    let mut socket = use_websocket(web_socket);
    rsx! {
        button {
            onclick: move |_| socket.send(SocketMessage::Hello(42)),
            "say hello"
        }
    }
}
```
That seems much more like the future than embracing Node...