[1] https://github.com/microsoft/CsWin32
[2] https://lowleveldesign.wordpress.com/2024/07/11/implementing...
[1] https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
The main source of confusion, and the reason some believe NativeAOT prohibits this, is libraries that perform unbound reflection in a way that isn't statically analyzable (think accessing a method by a computed string the compiler cannot see, without annotating with attributes the exact members you would like to keep and compile code for), or libraries that rely on reflection emit. But even reflection emit works for limited scenarios where runtime compilation is not actually required, like constructing a generic method whose argument is a class - there can only be a single generic instantiation (the shared __Canon one) in this case, which can be emitted at compile time. You can even expect reflection to work faster under NativeAOT - it uses a more modern pure C# implementation and does not need to deal with the fact that types can be added or removed at runtime.
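"Annotating with attributes the exact members you would like to keep" usually means DynamicallyAccessedMembers. A minimal sketch (the names here are made up):

using System;
using System.Diagnostics.CodeAnalysis;
using System.Reflection;

static class Reflector
{
    // The attribute tells the trimmer/AOT compiler to keep all public methods
    // of whatever Type flows into 'type', so GetMethod keeps working after trimming.
    static MethodInfo? FindPublicMethod(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type,
        string name)
        => type.GetMethod(name);
}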
That’s exactly what I do too.
I wish more people would talk about it. Thank you for such an interesting article!
It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
Yes and no. Yes, almost all of the standard library collections are allocation heavy and that is still the dominant pattern in C#, so if you want to avoid the GC you need to avoid them and resort to building your own primitives based on Memory/Span. Which sucks.
However, you can use interfaces in a no-GC world, since you can constrain the generic type parameters implementing those interfaces to be structs or ref structs, and the compiler will enforce rules that prevent them from being boxed onto the GC heap.
Also of recent note, the JIT can now automagically convert simple gc-heap allocations into stack allocations if it can trivially prove they don't escape the stack context.
> It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
It is a little-known fact that you can actually swap out the GC of the runtime. So you could plug in a null implementation that never collects (at your own peril...)
As for a delete operator, you can just roll your own struct based allocation framework that uses IDisposable to reclaim memory. But then you need to deal with all the traditional bugs like use-after-free and double-free and the like.
For me, I think low-GC is the happy medium. Avoid the heap in 99% of cases but let the GC keep things airtight.
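Something along these lines (a toy sketch, not a framework type; all the usual manual-memory caveats above apply):

using System;
using System.Runtime.InteropServices;

struct NativeBuffer : IDisposable
{
    private IntPtr _ptr;
    public readonly int Length;

    public NativeBuffer(int bytes)
    {
        _ptr = Marshal.AllocHGlobal(bytes);
        Length = bytes;
    }

    // View over the unmanaged block; never hand this out past Dispose.
    public unsafe Span<byte> Span => new((void*)_ptr, Length);

    public void Dispose()
    {
        if (_ptr != IntPtr.Zero) { Marshal.FreeHGlobal(_ptr); _ptr = IntPtr.Zero; }
    }
}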
How do you do this? Just so I can have another tool in my tool shed. Googling got me to an archived repo on GitHub with a sample GC - which is enough, but I wonder if there's something off the shelf.
In Java land, the Epsilon GC (a do-nothing GC) enables a pattern that's handy in perf test jobs in CI pipelines occasionally for some projects (i.e. run with Epsilon but constrain max memory for the process - CI builds will fail if memory usage increases)
I am not aware of any production grade replacement GCs for .NET out there currently
I forgot that there is built-in support for this model using the MemoryManager<T> class [0]. A memory manager is an abstract class that represents an owned block of memory, possibly including unmanaged memory. It implements IDisposable already so you can just plug into this.
The Memory<T> struct can optionally point internally to a MemoryManager instance, allowing you to plug your preferred style of allocating and freeing memory into parts of the framework.
There is a little irony that a MemoryManager<T> is itself a class and therefore managed on the gc-heap, but you can defeat this by using ObjectPool<T> to recycle those instances to keep allocation count steady state and not trigger the GC.
I have used this before (in the toy database i mentioned earlier) to allocate aligned blocks of unmanaged memory.
[0] https://learn.microsoft.com/en-us/dotnet/api/system.buffers....
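A rough sketch of what a MemoryManager<T> over unmanaged memory can look like (the type name is made up; NativeMemory needs .NET 6+, otherwise Marshal.AllocHGlobal works the same way):

using System;
using System.Buffers;
using System.Runtime.InteropServices;

sealed unsafe class NativeMemoryManager<T> : MemoryManager<T> where T : unmanaged
{
    private void* _ptr;
    private readonly int _length;

    public NativeMemoryManager(int length)
    {
        _length = length;
        _ptr = NativeMemory.Alloc((nuint)(length * sizeof(T)));
    }

    public override Span<T> GetSpan() => new(_ptr, _length);

    // The memory never moves, so pinning is effectively a no-op.
    public override MemoryHandle Pin(int elementIndex = 0)
        => new((byte*)_ptr + elementIndex * sizeof(T));

    public override void Unpin() { }

    protected override void Dispose(bool disposing)
    {
        if (_ptr != null) { NativeMemory.Free(_ptr); _ptr = null; }
    }
}

Its Memory property then flows into any API that accepts Memory<T>.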
How?
I know of constraints on generic type parameters, but not how to do this. A cursory search is unhelpful.
e.g.
interface Foo {
    int Calculate();
}

static void CalculateThing<T>(T impl)
    where T : Foo {
    var num = impl.Calculate() * 2;
    Console.WriteLine(num);
}
Here if you pass a struct that implements 'Foo', 'CalculateThing' will be monomorphized and the dispatch will be zero-cost, same as in Rust. You can apply additional constraints like `where T: struct` or `allows ref struct`. The last one is a new addition which acts like a lifetime restriction: it says you are not allowed to box T because it may be a ref struct. Ref structs are, for all intents and purposes, regular structs that can hold so-called "managed references" aka byrefs, written 'ref T', which are discussed in detail by the article this submission links to (ref structs can also hold other ref structs - you are not limited in nesting, but you are limited in cyclicality).
As for delete operator, 'dispose' works well enough. I have a toy native vector that I use for all sorts of one-off tasks:
// A is a shorthand for default allocator, a thin wrapper on top of malloc/realloc/free
// this allows for Zig-style allocator specialization
using var nums = (NVec<int, A>)[1, 2, 3, 4];
nums.Add(5);
...
// underlying pointer is freed at the end of the scope
It is very easy to implement and I assume C and C++ developers would feel right at home, except with better UX. This retains full compatibility with the standard library through interfaces and by being convertible to Span<T>, which almost everything accepts nowadays.
System-provided allocators are slower at small allocations than GC, but Jemalloc easily fixes that.
I missed this development! That was a big pain working with ref structs when they first came out.
List<int> nums = [1, 2, 3, 4];
//do stuff with nums
Delete(nums);
In addition, objects that hold references to other objects internally would need an implementation that allows traversing and recursively freeing those references in a statically understood way. This gets nasty quickly, since a List<T> can hold, let's say, strings, which may or may not have other locations referring to them. Memory safety goes out of the window for dubious performance wins (which aren't even guaranteed, since this is exactly where a GC has better throughput).
I can recommend watching the lectures from Konrad Kokosa that go into the detail how .NET's GC works: https://www.youtube.com/watch?v=8i1Nv7wGsjk&list=PLpUkQYy-K8...
In my comment I already suggested a context where GC can be turned off. I said: "It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory."
Also there is C++ for that, if the goal is to use C# as C++.
This really is a PoC. You might get better results by using snippets as the inspiration for rolling something tailored to your specific use-case.
This breaks the fundamental assumptions built into pretty much every piece of software ever written in the language - it's a completely inviable option.
Incorporating a borrow checker allows uncollected code to be added without breaking absolutely everything else at the same time.
Unfortunately, as usual in computing, we have to do huge circles shaped in zig-zag, instead of adopting what was right in front of us.
Lots of zig-zags.
I am a firm believer that if languages like Java and C# had been like those languages that predated them, most likely C and C++ would have been even less relevant in the 2010's, and revisions like C++11 wouldn't have been as important as they turned out to be.
Also, can't miss the opportunity to bring up Graydon's iconic 2010 talk "Technology from the past come to save the future from itself". http://venge.net/graydon/talks/
It seems more relevant than ever to study the fundamental discoveries made in the early history of Windows. We don't know the magnitude of how much the rendering API affects the success of an operating system. It would be safe for a novel OS to heed the importance of the rendering API.
But more abstractly, it could be that the best novel OS competitor to Windows is simply the open-source flavor. It would be stronger evidence to see someone building Windows 1.0 in a modern sense, stronger than any other evidence, that a serious competitor is incoming.
The next OS won't be written in Javascript (sorry React).
So even those that weren't initially exposed in unsafe mode were available at the MSIL level and could be generated via helper methods making use of "System.Reflection.Emit".
Naturally having them as C# language features is more ergonomic and safer than a misuse of MSIL opcodes.
In case anyone is interested, here is the spec about refs in structs and other lifetime features mentioned in the article:
https://github.com/dotnet/csharplang/blob/main/proposals/csh...
And here is the big list of ways .NET differs from the published ECMA spec. Some of these differences represent new runtime features.
https://github.com/dotnet/runtime/blob/main/docs/design/spec...
Using C/C++/Rust to do the same task is probably more productive than emitting MSIL opcodes, so that solution wasn't really that practical.
But with these new features being more ergonomic and practical, it becomes cost effective to just do it in C# instead of introducing another language.
Also, P/Invoke and CCW/RCW do have costs crossing the runtime layer, even if minor when compared with other languages.
On NativeAOT, you can instead use "DirectPInvoke" which links against specified binary and relies on system loader just like C/C++ code would. Then, you can also statically link and embed the dependency into your binary (if .lib/.a is available) instead which will turn pinvokes into direct calls (marshalling if applicable and GC frame transition remain, on that read below).
Lastly, it is beneficial to annotate short-lived PInvoke calls with [SuppressGCTransition] which avoids some deoptimizations and GC frame transition calls around interop and makes the calls as cheap as direct calls in C + GC poll (a single usually not-taken branch). With this the cost of interop effectively evaporates which is one of the features that makes .NET as a relatively high-level runtime so good at systems programming.
Unmanaged function pointers have similar overhead, and identical if you apply [SuppressGCTransition] to them in the same way.
* LibraryImport is not needed if pinvoke signature only has primitives, structs that satisfy 'unmanaged' constraint or raw pointers since no marshalling is required for these.
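For reference, a minimal sketch of what that looks like (the library and entry point names here are placeholders):

using System.Runtime.InteropServices;

static class Native
{
    // Only blittable primitives in the signature, so plain DllImport is enough -
    // no marshalling stub is generated.
    [DllImport("mynative", EntryPoint = "my_add", ExactSpelling = true)]
    [SuppressGCTransition] // only for short, non-blocking calls that never call back into managed code
    internal static extern int Add(int a, int b);
}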
If anything, article doesn't talk about MSIL or CLR, but C# language features. CLR is not the only target C# supports.
NativeAOT is supported in Avalonia (cross-platform UI framework), Razor Slices (dynamically render HTML from Minimal APIs) and I think there is also some support for AOT in MonoGame & FNA (game dev frameworks).
However, it's still early and a lot of the ecosystem doesn't support NativeAOT.
Native AOT depends on CLR infrastructure.
Is this right? I thought Rust's reason for XOR is deeper & is how it also guarantees memory safety for multi-threaded code too (& not just for reference lifetimes).
That’s not why, though. There are lots of reasons for Rust’s safety model, such as allowing for vastly faster code because aliasing can’t happen unless both references are read-only, in which case it doesn’t matter. There is a lot to Rust’s borrow rules that this article misses.
It’s like the article earlier today that was, essentially, “I don’t understand Rust and it would be better if it was Haskell”.
[1] https://kidneybone.com/c2/wiki/SufficientlySmartCompiler
Whether the aliasing argument holds water does not affect whether it was used as justification for Rust's design.
TLDR: 0-5% faster with noalias optimizations on.
You can always try running some benchmarks by building code with -Zmutable-noalias=no.
Other languages have long had aliasing guarantees, Fortran for one. C and C++ have the restrict keyword, though obviously it's a programmer guarantee there and is less safe, since if the caller of the function passes overlapping memory (e.g. the same buffer at an offset) the optimisation is not valid.
I'd say in name only, given that there were numerous aliasing bugs in LLVM that only became visible when Rust tried to leverage it. I suspect similar pitfalls exist in every single C/C++ compiler, because the rules for restrict are not only difficult for users to understand but also difficult to implement correctly.
(Otherwise, the Rust project wouldn't have encountered all the bugs related to aliasing analysis in LLVM.)
Take for e.g. this:
void add(double *A, double *B, double *C, int N) {
    for(int i = 0; i < N; i++) {
        C[i] = A[i] + B[i];
    }
}
You generally wouldn't find many C developers sprinkling restrict in on functions like this, since that function could be useful to someone calling add on two overlapping arrays. On the other hand, someone writing an ODE solver in a scientific code might write a function like this, where it would never make sense for the memory locations to overlap:
void RHS(double* restrict x, double* restrict xdot, int N, double dt) {
    for(int i = 0; i < N; i++) {
        xdot[i] = -x[i]/dt;
    }
}
In those sorts of circumstances, it's one of the first performance optimisations you might reach for in your C/C++ toolkit, before starting to look at, e.g., parallelism. It's been in every simulation or mathematical code base I've worked on in 10+ years at various academic institutions and industry companies. I'm sure there were probably others.
It's generally true that C/C++ code rarely if ever uses restrict, and that Rust was the first to put any real pressure on those code paths. But once the problem was found it took over a year to fix, and it's incorrect to state that the miscompilation only affected code patterns that would exist solely in Rust.
Because of two things mentioned in the article just below.
> Here we see C#’s first trade-off: lifetimes are less explicit, but also less powerful.
If C# is less powerful, it does not need powerful syntax. One does not need explicit lifetimes in Rust most of the time either; deduction works just fine.
> The escape hatch: garbage collection
If C# is ok with not tracking _all_ lifetimes _exactly_, it does not need powerful syntax. Not an option in Rust, by design.
Basically, not all code is possible to write, and not all code is as efficient.
The right move at this point would be to use an optional type, surely...
I wish the comments focused more on the subject of the article which is interesting and under-discussed.
I think you'll start seeing a lot more "cross platform C# frameworks" when PanGUI drops: https://pangui.io
It's a native layout/gui util by the devs of the mega-popular Odin extension in Unity, and the idea is to directly solve "good native c# gui lib" with the implementation just being a single shader and an API that is more like DearIMGUI.
I'm also planning on using it in my own small 2D C# engine when it's available: https://github.com/zinc-framework
I already do iterative hot reload GUI with DearImGUI in that engine so PanGUI will work in the same way.
Unfortunately, the way they've designed it without accessibility in mind from the start means it's unlikely ever to be anything other than an after thought.
From there, you can do your front end in absolutely whatever (Svelte, Next, etc.) and your back end is the .NET host doing whatever. So it's basically making a "native webapp", not actually doing what Maui Blazor Hybrid does where it's opening a native context and injecting a webview (if I understand it correctly)
Quick nitpick: the find example could return a reference to a static variable, and thus avoid both the heavy syntax and the leaked allocation:
https://play.rust-lang.org/?version=stable&mode=debug&editio...
A related idea is the concept of second-class references, as exist in Hylo. There the "ref" is not part of the type, but the way they work is very similar.
Lifetimes give you a lot of power but, IMO, I think languages that do this should choose between either being fully explicit about them, or going "second class" like C# and Hylo and avoiding lifetime annotations entirely.
Eliding them like Rust does can be convenient for experts but is actually a nightmare for newbies. For an example of a language that does explicit lifetimes without becoming unbearable, check out Austral.
Instead of C#'s scoped ref solution to having a function accept and return multiple references, another option (in an imaginary language) would be to explicitly refer to the relevant parameters:
ref(b) double whatever(ref Point a, ref Point b) { return b.x; }
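For comparison, a rough sketch of how current C# spells the same intent with 'scoped': marking 'a' as scoped tells the compiler that no reference derived from it escapes, so the returned ref can only come from 'b'.

struct Point { public double X, Y; }

static class Demo
{
    static ref double Whatever(scoped ref Point a, ref Point b)
    {
        return ref b.X;
    }
}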
C++ has to be "best effort" because it tries to bolt these semantics onto the pre-existing reference types, which were never required to adhere to them. It can catch some obvious bugs but most of the time you'll get a pile of false positives and negatives.
The reason is that these changes are not aimed at the average Joe developer writing C# microservices. These changes, and the whole Span/ref dialect of C#, are aimed at the Dr. Smartass developer writing high-performance C# libraries. It's an advanced-level feature.
Basically gives you a release-by-release highlight reel of what's changed and why it's changed.
I glance at it every release cycle to get an idea of what's coming up. The even numbered releases are LTS releases while the odd numbered releases (like the forthcoming 9) are short term. But the language and runtime are fairly stable now after the .NET Framework -> .NET Core turbulence and now runtime upgrades are mostly just changing a value in a file to select your target language and runtime version.
https://learn.microsoft.com/en-us/archive/msdn-magazine/2018...
Span makes working with large buffers easier for Joe developer, if he could be bothered to spend 20 seconds looking at the examples in the documentation.
But before span and friends you could always use pointers. Spans just make things friendlier.
And C# also has built-in SIMD libraries if you need to do some high performance arithmetic stuff.
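For example, something like this (a small sketch using System.Numerics; assumes all three arrays have equal length):

using System;
using System.Numerics;

static class SimdDemo
{
    // Adds two arrays in Vector<float>-sized chunks, with a scalar tail.
    public static void Add(float[] a, float[] b, float[] dst)
    {
        int i = 0;
        int width = Vector<float>.Count;
        for (; i <= a.Length - width; i += width)
            (new Vector<float>(a, i) + new Vector<float>(b, i)).CopyTo(dst, i);
        for (; i < a.Length; i++)
            dst[i] = a[i] + b[i];
    }
}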
My assumption is that since there is a GC, and it is not native code, there are too many use cases where it can't apply but Rust can. Once there is a way for it to compete with Rust in every use case where Rust can be used, maybe there will be more talk.
The "advanced" stuff is very much about bringing Rust-like lifetimes to the language and moving the powers and capabilities outside of the `unsafe` keyword world, by making it much less unsafe in similar ways to how Rust does lifetime/borrow-checking but converted to C#/CLR's classic type system. It's adding the "too clever" memory model of Rust to the much simpler memory model of a GC. (GCs are a very simple memory model invented ~70 years ago.)
Try

int* bug() {
    int longlived = 12;
    int* plonglived = &longlived;
    {
        int shortlived = 13;
        plonglived = &shortlived;
    }
    return plonglived;
}

with gcc -Wall -Werror
Excellent question
And I feel that Rust, by making it explicit, makes it harder and unergonomic on the developer
Why would you do that?
> In fact, this is so common that Rust doesn’t require you to write the lifetimes explicitly
This is an actual _pattern_? Yikes^2.
a getter?
> This is an actual _pattern_? Yikes^2.
wat.
Getters return values. This returns a pointer. So it's an accessor. With unchecked semantics. It's bizarre to me that anyone would use this technique. It's all downside with no upside.
> wat.
I'm expressing surprise that anyone would do this. I'm sure you were capable of understanding that.
When I use getter, I want to see the value of a field. I don't want an owned copy of said value, I just want to look at it, so returning reference makes _a lot more_ sense than returning a copy. In example it uses `i32`, but that's just for readability.
> I'm expressing surprise that anyone would do this. I'm sure you were capable of understanding that.
Yes, and I'm expressing surprise that you think it's bad. I'm not even sure what is bad? Lifetime elision that is well documented and works in a non-ambiguous manner? Using references instead of values? Do we need to memcpy everything now to please you?
You can look at it with an owned copy. What is the issue? Is premature optimization the default mode in writing Rust? You don't see the issues with this?
> I'm expressing surprised that you think it's bad
You're surprised that someone simply has a different opinion? Your reaction failed to convey that.
uhm, common sense isn't a premature optimization. Avoiding a needless copy is the default mode in writing Rust and any other language.
This isn't exactly a pointer: Rust distinguishes between read-only and mutable ("exclusive") references.
This returns a read-only reference, so it's very much like a getter: you cannot use it to modify the thing it points to.
It's just that it does it without a copy, which matters for performance in some cases.
Instead, its growth was stunted and many people avoid it even though it is an excellent language.
Because Anders Hejlsberg is one of the greatest language architects and the C# team are continuing that tradition.
The only grudge I have against them is they promised us discriminated unions since forever and they are still discussing how to implement it. I think that is the greatest feature C# is missing.
For the rest C# is mostly perfect. It has a good blend of functional and OOP, you can do both low level and high level code. You can target both the VM or the bare hardware. You can write all types of code beside system programming (due to the garbage collector). But you can do web backend, web front-end, services, desktop, mobile apps, microcontroller stuff, games and all else. It has very good libraries and frameworks for whatever you need. The experience with Visual Studio is stellar.
And the community is great. And for most domains there is generally only one library or framework everybody uses so you not only don't have to ask what to use for a new feature or project, but you also find very good examples and help if you need.
It feels like a better, more straightforward version of Java, less verbose and less boilerplate-y. So that's why .NET didn't need its own Kotlin.
Sure, it can't meet the speed of Rust or C++ for some tasks because of the garbage collector. But provided you AOT compile, disable the garbage collector and do manual memory management, it should.
.NET has moved to being directly cross-platform today and is great at server/console app cross-platform now, but its support for cross-platform UI is still relatively nascent. The official effort is called MAUI, has mostly but not exclusively focused on mobile, and it is being developed in the open (as open source does) and leaves a lot to be desired, including by its relatively slow pace compared to how fast the server/console app cross-platform stuff moves. The Linux desktop support, specifically, seems constantly in need of open source contributors that it can't find.
You'll see a bunch of mentions of third-party options Avalonia and Uno Platform doing very well in that space, though, so there is interesting competition, at least.
.NET has some small cross platform abilities, but calling it totally cross platform is wrong.
Operating Systems: Linux, macOS, Windows, FreeBSD, iOS, Android, Browser
Architectures: x86, x86_64, ARMv6, ARMv7, ARMv8/ARM64, s390x, WASM
Notes:
- Mono as referred to here means https://github.com/dotnet/runtime/tree/main/src/mono which is an actively maintained runtime flavor, alongside CoreCLR.
- Application development targets on iOS and Android use Mono. Android can be targeted as linux-bionic with regular CoreCLR, but it's pretty niche. iOS has experimental NativeAOT support but nothing set in stone yet, there are similar plans for Android too.
- ARMv6 requires building the runtime with the Mono target. Building the runtime is actually quite easy compared to other projects of similar size. There are community-published docker images for .NET 7 but I haven't seen any for .NET 8.
- WASM also uses Mono for the time being. There is a NativeAOT-LLVM experiment which promises significant bundle size and performance improvements
- For all the FreeBSD slander, .NET does a decent job at supporting it - it is listed in all sorts of OS enums, dotnet/runtime actively accepts patches to improve its support and there are contributions and considerations to ensure it does not break. It is present in https://www.freshports.org/lang/dotnet
At the end of the day, I can run .NET on my router with OpenWRT or Raspberry Pi4 and all the laptops and desktops. This is already quite a good level given it's completely self-contained platform. It takes a lot of engineering effort to support everything.
In fairness this ignores a lot of embedded work.
Java gets to cheat here a bit because they have some custom embedded stuff, but they are also not actually running on all CPUs.
(Jk I love C#)
There's a lot of options, but also the latest of .NET (not Framework) just runs natively on Linux, Mac and Windows, and there's a few open source UI libraries as mentioned by others like Avalonia that allow your UI to run on any OS.
if building for the web online, asp.net core runs on Linux servers as well as windows
and there's MAUI [2] (not a fan of this); you are better off with the others.
in summary, C# and .NET are cross-platform; third-party developers build better frameworks and tools for other platforms while Microsoft prefers to develop for the Microsoft ecosystem, if you get
[0] https://avaloniaui.net/ [1] https://platform.uno/ [2] https://learn.microsoft.com/en-us/dotnet/maui/what-is-maui?v...
I will say MS has been obsessed with trying to take a slice of the mobile pie.
However their Xamarin/WPF stuff left so much to be desired and was such a Jenga tower that I totally get the community direction to go with a framework you ostensibly have more control over vs learning that certain WPF elements are causes of e.g. memory leaks...
I work at one of the few startups that uses C# and .NET.
Dev machines are all M1/M3 MacBook Pros and we deploy to a mix of x64 and Arm64 instances on GCP and AWS.
I use VS Code on macOS while the rest of the team prefers Rider.
Zero friction for backend work and certainly more pleasant than Node. (We still use Node and JS for all front-end work).
Mono was a third-party glorified hack to get C# to work on other OSes. .NET has been natively cross-platform with an entirely new compiler and framework since mid 2016.
Indeed, this is what I didn't like back then. Java has official support for other OSes, which C# was lacking at the time. Good to hear that things changed now.
IL2CPP, Unity's C# to C++ compiler, does not help for any of this. It just allows Unity to support platforms where JIT is not allowed or possible. The GC is the same if using Mono or IL2CPP. The performance of code is also roughly identical to Mono on average, which may be surprising, but if you inspect the generated code you'll see why [2].
[1] https://xoofx.github.io/blog/2018/04/06/porting-unity-to-cor... [2] https://www.jacksondunstan.com/articles/4702 (many good articles about IL2CPP on this site)
https://discussions.unity.com/t/coreclr-and-net-modernizatio...
C# is plenty fast for game programming.
The developers of Risk of Rain 2 were undoubtedly aware of the hitches, but it interfered with their vision of the game, and affected users were left with a degraded experience.
It's worth mentioning that when game developers scope out the features of their game, available tech informs the feature set. Faster languages thus enable a wider feature set.
This is true, but developer productivity also informs the feature set.
A game could support all possible features if written carefully in bare metal C. But it would take two decades to finish and the company would go out of business.
Game developers are always navigating the complex boundary around "How quickly can I ship the features I want with acceptable performance?"
Given that hardware is getting faster and human brains are not, I expect that over time higher level languages become a better fit for games. I think C# (and other statically typed GC languages) are a good balance right now between good enough runtime performance and better developer velocity than C++.
They probably create too much garbage. It’s equally easy to slow down C++ code with too many malloc/free functions called by the standard library collections and smart pointers.
The solution is the same for both languages: allocate memory in large blocks, implement object pools and/or arena allocators on top of these blocks.
Neither C++ nor C# standard libraries have much support for that design pattern. In both languages, it’s something programmers have to implement themselves. I did things like that multiple time in both languages. I found that, when necessary, it’s not terribly hard to implement that in either C++ or C#.
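A trivial sketch of the pooling idea (names are illustrative, not from any framework):

using System.Collections.Generic;

sealed class SimplePool<T> where T : class, new()
{
    private readonly Stack<T> _items = new();

    // Rent at the start of a frame, Return when done;
    // at steady state this allocates nothing.
    public T Rent() => _items.Count > 0 ? _items.Pop() : new T();

    public void Return(T item) => _items.Push(item);
}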
I think this is where the difference between these languages and Rust shines - Rust seems to make these things explicit, while C++/C# hide them behind compiler warnings.
Some things you can't do as a result in Rust, but really, if the Rust community cares it could port those features (make an always-stack type, e.g.).
Code base velocity is important to consider in addition to dev velocity, if the code needs to be significantly altered to support a concept it swept under the rug e.g. object pools/memory arenas, then that feature is less likely to be used and harder to implement later on.
As you say, it's not hard to do or a difficult concept to grasp, once a dev knows about them, but making things explicit is why we use strongly typed languages in the first place...
In this game's case though they possibly didn't do much optimization to reduce GC by pooling, etc. Unity has very good profiling tools to track down allocations built in so they could have easily found significant sources of GC allocations and reduced them. I work on one of the larger Unity games and we always profile and try to pool everything to reduce GC hitches.
GC can work or not when writing a game engine. However, everybody who writes a significant graphical game engine in a GC language learns how to fight the garbage collector - at the very least delaying GC until between frames. Often they treat the game like safety-critical code: preallocate all buffers so that there is no garbage in the first place (or perhaps minimal garbage). Going without garbage collection might technically use more CPU cycles, but in general they are spread out more over time and so are more consistent.
You have to jump through some hoops but it's really not that convoluted and miles easier than good C++.
I wish there was an attribute in C# like "[MustNotAllocate]" which fails the compilation on known allocations such as these. It's otherwise very easy to accidentally introduce some tiny allocation into a hot loop, and it only manifests as a tiny pause after 20 minutes of runtime.
Even when allocations happen, .NET is much more tolerant of allocation traffic than, for example, Go. You can absolutely live with a few allocations here and there. If all you have are small transient allocations, it means that the live object count will be very low, and all such allocations will die in Gen 0. In scenarios like these, it is common to see only infrequent sub-500us GC pauses.
Last but not least, .NET is continuously being improved - pretty much all standard library methods already allocate only what's necessary (which can mean nothing at all), and with each release everything that has room for optimization gets optimized further. .NET 9 comes with object stack allocation / escape analysis enabled by default, and .NET 10 will improve this further. Even without this, LINQ for example is well-behaved and can be used far more liberally than in the past.
It might sound surprising to many here but among all GC-based platforms, .NET gives you the most tools to manage the memory and control allocations. There is a learning curve to this, but you will find yourself fighting them much more rarely in performance-critical code than in alternatives.
That being said, .NET includes lots of performance-focused analyzers, directing you to faster and less-allocatey equivalents. There surely also is one on NuGet that could flag foreach over a class-based enumerator (or LINQ usage on a collection that can be foreach-ed allocation-free). If not, it's very easy to write and you get compiler and IDE warnings about the things you care about.
At work we use C# a lot and adding custom analyzers ensuring code patterns we prefer or require has been one of the best things we did this year, as everyone on the team requires a bit less institutional knowledge and just gets warnings when they do something wrong, perhaps even with a code fix to automatically fix the issue.
It's really not that hard to structure a game that pre-allocates and keeps per frame allocs at zero.
Unity used Mono. Which wasn't the best C# implementation, performance wise. After Mono changed its license, instead of paying for the license, Unity chose to implement their infamous IL2CPP, which wasn't better.
Now they want to use CoreCLR which is miles better than both Mono and IL2CPP.
Also, if you invoke GC intentionally at convenient timing boundaries (I.e., after each frame), you may observe that the maximum delay is more controllable. Letting the runtime pick when to do GC is what usually burns people. Don't let the garbage pile up across 1000 frames. Take it out every chance you get.
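Concretely, something like this (a sketch; whether it actually helps depends entirely on the game):

using System;

static class FrameGc
{
    // Call at a convenient boundary (e.g. end of frame) so transient garbage
    // never piles up into one large, badly timed pause.
    public static void OnFrameEnd()
        => GC.Collect(0, GCCollectionMode.Forced, blocking: true);
}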
Manually invoking GC many times per second is a viable approach?
You're basically trading off worse throughput for better latency.
If you forcibly run the GC every frame, it's going to burn cycles repeatedly analyzing the same still-alive objects over and over again. So the overall performance will suffer.
But it means that you don't have a big pile of garbage accumulating across many frames that will eventually cause a large pause when the GC runs and has to visit all of it.
For interactive software like games, it is often the right idea to sacrifice maximum overall efficiency for more predictable stable latency.
It might be more useful to use the OSU! approach as a reference: https://github.com/dotnet/runtime/issues/96213#issuecomment-...
OSU! represents an extreme case where the main game loop runs at 1000hz, so for much more realistic ~120hz you have plenty of options.
Magic, code or otherwise, sucks when the spell/library/runtime has different expectations than your own.
You expect levitation to apply to people, but the runtime only levitates carbon-based life forms. You end up levitating people without their effects (weapons/armor), to the embarrassment of everyone.
There should be no magic, everything should be parameterized, the GC is a dangerous call, but it should be exposed as well (and lots of dire warnings issued to those using it).
If you have a bunch of objects in an array that you have a reference to such that you can pass it, then, by definition, those objects are not garbage, since they're still accessible to the program.
There should be some middle ground between RAII and invoking Dispose/delete and full blown automatic GC.
The article discusses ref lifetime analysis that does have relationship with GC, but it does not force you into using one. Byrefs are very special - they can hold references to stack, to GC-owned memory and to unmanaged memory. You can get a pointer to device mapped memory and wrap it with a Span<T> and it will "just work".
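For example (a sketch; the pointer stands in for whatever unmanaged or device-mapped memory you actually have):

using System;

static class SpanOverNativeDemo
{
    static unsafe void Demo(byte* ptr, int length)
    {
        var span = new Span<byte>(ptr, length);
        span.Fill(0xFF);            // regular Span APIs work over the unmanaged block
        int total = 0;
        foreach (byte b in span) total += b;
    }
}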
AFAIK it has been possible to replace the GC with alternative implementation for the past few years, but no one has made one yet.
EDIT: Some experimental alternative GC implementations:
https://github.com/kkokosa/UpsilonGC
https://www.codeproject.com/Articles/5372791/Implementing-a-...
> Unity devs run into
So it's viable but not perfect
They also have a C# subset called Burst, which could have been avoided if they were using .NET Core.
BUT it's definitely not a language designed for no-gc so there are footguns everywhere - that's why Rider ships special static analysis tools that will warn you about this. So you can keep GC out of your critical paths, but it won't be pretty at that point. But better than Java :D
Possibly prettier than C and C++ still. Every time I write something and think "this could use C" and then I use C and then I remember why I was using C# for low-level implementation in the first place.
It's not as sophisticated and good of a choice as Rust, but it also offers "simpler" experience, and in my highly biased opinion pointers-based code with struct abstractions in C# are easier to reason about and compose than more rudimentary C way of doing it, and less error-prone and difficult to work with than C++. And building final product takes way less time because the tooling is so much friendlier.
To ease the wait you could try Dunet (discriminated union source generator).
Practical example in a short write up here: https://chrlschn.dev/blog/2024/07/csharp-discriminated-union...
if (s is string or int)
{
    // what's the type of s here? is it "string | int" ?
}
And not to mention that the BCL should probably get new overloads using DU's for some APIs. But there is at least a work in progress now, after years of nothing.
In order to submit bugs to Microsoft, the application redirects the end user to their website over a web socket. The company I work for has extra security and this breaks, preventing me from filing a cornucopia of bugs with Visual Studio. I cannot even file a bug on how the submit system is broken with Visual Studio.
Closing and re-opening Visual Studio is a daily task. Most often during code refactoring of multiple parts. Creating new classes has inconsistent template usage in the second most recent released version. Compile error message history can become stale and inconsistent where the output console does not. Pasting content into the resource manager is still broken during tab entry. Modal dialogs still cover the screen during debug. And those don't even touch the inconsistent and buggy user experience.
C# is a tool and like all tools it is good for some things and really bad for others. No tool is perfect. You can still use a ball-peen hammer for roofing but be better to have a roofing hammer. I would use Swift on iOS and Kotlin on Android for those platform projects, I don't even know those languages, and wouldn't use C#.
I assume you mean just the Windows Visual Studio? The Mac version is not exactly on par with the Windows one. Yeah, C# is great, but one would need the Windows version of VS (NOT VS Code) to take full advantage of C#. For me that is a deal breaker, when the DX of a language is tied to a proprietary IDE by MS.
https://blog.jetbrains.com/blog/2024/10/24/webstorm-and-ride...
[edit: I’ll note I’ve used successfully both Win and Linux]
So it seems at least that part of your critique is outdated.
I'm not sure what you mean about the inference, I've never had any problem with that that I can remember. And it can be a bit slow to start up or analyze a project at first load but in return it gives much better code completion and such.
I have been learning F# for a while now, and while the functional side that is pushed heavily is a joy to use, anything that touches the 'outside world' is going to have way more resources for C# as far as libraries, official documentation, general information including tutorials etc. You will need to understand and work with those.
So you really do need to understand C# syntax and semantics. Additionally there are a few concepts that seem the same in each language but have different implementations and are not compatible (async vs tasks, records) so there is additional stuff to know about when mentally translating between C# and F#.
I really want to love F# but keep banging my head against the wall. Elixir, while not being typed yet and not being as general purpose, at least allows me to be productive with its outstanding documentation and its abundance of tutorials and books on both the core language and domain-specific applications. It is also very easy to mentally translate Erlang to Elixir and vice versa on the very few occasions needed.
The flipside is that adopting F# is less risky as a result - if there isn't a library or you are stuck you can always bridge to these .NET libraries. Its similar I think with other shared runtime languages (e.g. Scala, Kotlin, Clojure, etc). You do need to understand the ecosystem as a whole at some point and how it structures things.
Yeah. What's your opinion on Gleam?
While it's good to have the escape hatch - it means it's less of a risk to adopt F# (i.e. you will always have the whole .NET ecosystem at your fingertips) - if the C# framework being adopted is complex (e.g. uses a lot of implicits), it requires good mentoring and learning to bridge the gap, and usually at this point things like IDE support, mocking, etc. that weren't needed as much before are needed heavily (like in a typical C# code base). Many C# libraries are therefore not that easy IMO, but with C# native templates, etc. it becomes more approachable if coming from that side.
I've found things like the differences in code structure, the introduction of things like patterns (vs F#'s "just functions" ideal), dependency injection, convention based things (ASP.NET is a big framework with lots of convention based programming) and other C# things that F# libraries would rather not have due to complexity is where people stumble. Generally .NET libraries are easy in F# - its frameworks that are very OOP that make people outside the C#/Java/OOP ecosystem pause a bit at least in my experience. There's good articles around libraries vs frameworks in the F# space if I recall illustrating this point.
I wish Anders was still in charge of C# :(
No, it isn't. The power of C++ templates is still astronomically far from C# generics.
Haskell promises to solve concurrency and the Rust boys are always claiming that it's impossible to write buggy code in Rust.. and the jump from C/C++/C#/Golang to Rust is much smaller than to Haskell..
Oh that's what I was getting at, that makes Rust pretty much a must-have tool to have in your tool-belt.
I'm not a templates/macro guy so I'm curious what's missing.
It's good that it is now, but how can it be implemented in a way that has truly separate instantiations of generics at runtime, when calls cross assembly boundaries? There's no single good place to generate a specialization when virtual method body is in one assembly while the type parameter passed to it is a type in another assembly.
There are no assembly boundaries under NativeAOT :)
Even with JIT compilation - the main concern, and what requires special handling, are collectible assemblies. In either case it just JITs the implementation. The cost comes from the lookup - you have to look up a virtual member implementation and then specific generic instantiation of it, which is what makes it more expensive. NativeAOT has the definitive knowledge of all generic instantiations that exist, since it must compile all code and the final binary does not have JIT.
Sorry for the snark, but I do think C# compile times are just barely acceptable for me, so I'm happy they aren't adding more heavy compile-time features.
No! It misses "typedef", both at module API level and within generics.
If you are looking at this through the lens of HN, I think much of this can be attributed to a certain ideological cargo cult that actively seeks to banish any positive sentiment around effective tools. You see this exact same thing with SQL providers, web frameworks, etc. If the tool is useful but doesn't have some ultra-progressive ecosystem around it (i.e., costs money or was invented before the average HN user's DOB), you can make a winning bet that talking about it will result in negative karma outcomes.
Everyone working in enterprise software development has known about the power of this language for well over a decade. But, you won't find a single YC startup that would admit to using it.
I suspect it is less about cargo culting, and more about two separate things:
First, the tooling for C# and really anything dotnet has been awful on any OS other than Windows until fairly recently. Windows is (to be blunt) a very unpopular OS in every development community that isn't dotnet.
Second, anything enterprise is worth taking with a skeptical grain of salt; "enterprise" typically gets chosen for commercial support contracts, vendor lock-in, or astronaut architects over-engineering everything to fit best practices from 20 years ago. Saying that big businesses running on it is a virtue is akin to saying that Oracle software is amazing, or that WordPress engineering is amazing because so many websites run on it. Popularity and quality are entirely orthogonal.
I suppose there is probably another reason, which is the cluster fuck that has been the naming and churn of dot net versions for several years. ASP.NET, then core, then the core suffix got dropped at version 5, even though not everything was cross platform... So much pointless confusion.
My only issue with many of the improvements in C# is that all of them are optional for backwards compatibility reasons. People who don't know or don't care about new language features can still write C# like it's 2004 and all of the advantages of trying to modernize go out of the window. That means that developers often don't see the need to learn any of the new features, which makes it hard for projects to take advantage of the language improvements.
Instead of new platform libs and compilers simply defaulting to some reasonable cutoff date and saying "You need to install an ancient compiler to build this".
There is nothing that prevents me from building my old project with an older set of tools. If I want to make use of newer features then I'm happy to continuously update my source code.
Some examples of companies/products not implementing backwards compatibility are Delphi and Angular. Both are effectively dead. .NET Core wasn't backwards compatible with .NET Framework, but MS created .NET Standard to bridge that gap. .NET Standard allows people to write code in .NET Core that will run in .NET Framework. It's not perfect, but apparently it was good enough.
Companies usually won't knowingly adopt a technology that will be obsoleted in the future and require a complete rewrite. That's a disaster.
But the compiler only consumes syntax (C#11, C#12, C#13 and so on), so I don't see why the compiler that eats C#13 necessarily must swallow C#5 without modification
public Patient Patient { get; set; }
The same thing with modern code would be public Patient? Patient { get; set; }
Because with the new C#, reference types are non-nullable by default. Fortunately there is a compiler flag to turn this off, but it's on by default. As a guy who has worked in C# since 2005, a breaking change would make me pretty irate. Backwards compatibility has its benefits.
What issues do you have with backwards compatibility?
As a class library example (which is contrary to what I said earlier about .NET compatibility vs C# compatibility): it was a massive mistake to let double.ToString() use the current culture rather than the invariant culture. It should change to either require passing a culture always (a breaking API change) or to use InvariantCulture (a behaviour change requiring code changes to keep the old behavior).
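For anyone who hasn't hit this, the pitfall looks like:

using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        double d = 1234.5;
        Console.WriteLine(d.ToString());                             // "1234,5" under e.g. a German locale
        Console.WriteLine(d.ToString(CultureInfo.InvariantCulture)); // always "1234.5"
    }
}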
I would imagine that's a carryover from the Win32/Client-Server days when that would have been a better choice.
Is that annoying? Yea. Is that annoying enough to force companies to collectively spend billions to look through their decades old codebases for double.ToString() and add culture arguments? Also keep in mind, this is a runtime issue, so the time to fix would be much more than if it were a compile issue. I would say no.
Just the move to Unicode (i.e. from 2007 to 2009) took some work, but otherwise I can't think of any intentional breaking changes...? In fact, it's one of the most stable programming environments I know of – granted, in part because of being a little stagnant (but not dead).
I've been using Delphi since Delphi 3. The only really breaking change I can recall was the Unicode switch. And that was just a minor blip really. Our 300kloc project at work took a couple of days to clean up the compiler errors and it's been Unicode-handling ever since. It's file integration and database heavy, so lots of string manipulation.
Most of my hobby projects didn't need any code changes.
In fact, the reason Delphi was late to the Unicode party was precisely because they spent so much time designing it to minimize impact on legacy code.
Not saying there hasn't been some cases, but the developers of Delphi have had a lot of focus on keeping existing code running fine. We have a fair bit of code in production that is decades old, some before y2k, and it just keeps on ticking without modification as we upgrade Delphi to newer versions.
The market has been ignoring Delphi for that long. It probably peaked with D5, once they changed their name from Borland to Inprise, it was over.
I hear it's still somewhat popular in Eastern European countries, but I heard that several years ago.
But is also not a trivial task.
I think it depends on location. In my part of the world .Net is something which lives in middle sized often stagnating companies. Enterprise around here is married to the JVM and they even tend to use more Typescript on the backend than C#. I’m not going to defend the merits of that in any way, that is just the way of things.
That being said, I do get the impression that HN does know that Rust isn't seeing much adoption as a general-purpose language. So I wouldn't count C# out here, considering how excellent it has become since the transition into Core as the main .Net. I say this as an absolute C# hater, by the way: I spent a decade with it and I never want to work with it again. (After decades of SWE I have fun with Python, C/Zig, JS/TS, and no other language.)
Many developers already know Java, so it's easier to hire Java developers.
>That being said, I do get the impression that HN does know that Rust isn't seeing much adoption as a general-purpose language. So I wouldn't count C# out here, considering how excellent it has become since the transition into Core as the main .Net. I say this as an absolute C# hater, by the way: I spent a decade with it and I never want to work with it again. (After decades of SWE I have fun with Python, C/Zig, JS/TS, and no other language.)
I didn't like the old C# and .NET. However, the new one is wonderful and I quite enjoy using it. More than Java or Go. On par with Python, but I wouldn't use Python for now for large web backend applications.
I tried Rust, but for some reason I can't grow to like it. I'd prefer using C or Zig, or even a sane subset of C++ (if such a thing even exists).
Python is a horrible language, but it's also the language I actually get things built in. I do think it's a little underrated for large web apps since Django is a true workhorse, but it takes discipline. C is for performance, embedded, and Python/Typescript libraries, and Zig is basically just a better C because of the interoperability. Typescript is similar to Python for me: I probably wouldn't use it if it wasn't adopted everywhere, but I do like working with it.
We’ve done some Rust pocs but it never really got much traction and nobody really likes it. + I don’t think I’ve ever seen a single Rust job in my area of the world. C/C++ places aren’t adopting it, they are choosing Zig. That is if they’re going away from C/C++ at all.
I’m fairly confident that PHP, Python, JS/TS, Java and C/C++ will be what people still work on around here when I retire. Go is the only language which has managed to see some real adoption in my two decade career.
Python is the least fun language currently in use at any scale. Pretty much completely down to the lack of a coherent tool chain. When JS has better package management than you then you know you have a massive problem.
Microsoft probably added these features to push the language into new niches (like improving the story around Unity and going after Arduino/IoT). But it's of little practical appeal to their established base.
Not sure about that. Maybe there are? If you do web or mobile apps, C# would be an excellent choice. Go would be also an excellent choice for web.
For AI I wouldn't use C#. Even though it has excellent ML libraries, most research and popular stuff is done using Python and PyTorch, so that's what I would choose.
For very low level, I'd take C or Zig. But I don't know many startups who are into very low level stuff.
>Everyone working in enterprise software development has known about the power of this language for well over a decade.
What is an enterprise? Is Google not an enterprise? Is Apple not an enterprise? Is Facebook not an enterprise? What about Netflix, Uber and any other big tech company? Weren't all enterprises start-ups at the beginning?
Does enterprise mean boring old company established long before the invention of Internet, which does old boring stuff, employs old boring people and use old boring languages? I imagine a grandpa with a long white beard staring at some CRTs with Cobol code and SAP Hana.
But I wouldn't say their choice of C# is due to them being old and boring. If it was that, they'd use Java (as many do). In my eyes choosing C# signals to me that you do want good technology (again, you could have gone with Java), but want that technology to be predictable and boring. A decent rate of improvement with minimal disruption, and the ability to solve a lot of issues with money instead of hiring (lots of professionally maintained paid libraries in the ecosystem).
And don’t bring up Mono, etc. - it was a dumpster fire then and it’s only recently gotten better. It’s tough for any tech to shed a very long negative legacy.
GUI libraries might have some potential for improvement, but I would reach for C# for any task that didn't strictly require a different language.
Effective at what?
Want GC lang with lots of libraries? Use Java.
Want GC free lang with safety? Use Rust.
Otherwise just use C. Or C++.
For me C# lies in this awkward spot. Because of past decisions it will never have quite the ecosystem of Java. And because GC -free and GC libraries mix as well as water and oil, you get somewhat of a library ecosystem bifurcation. Granted GC-less libraries are almost non-existent.
Since we discuss C# here, it is a good jack of all trades language where you can do almost anything, with decent performance, low boilerplate. It's easy to read, easy to learn and you have libraries for everything you need, excellent documentation and plenty of tutorials and examples. A great thing is that for every task and domain there is a good library or framework that most developers use, so you don't have to ask yourself what to use and how and you find a lot of documentation, tutorials and help for everything.
Java is a bit more boilerplate-y, has somewhat fewer features and less ease of use, and has many libraries and frameworks that do the same thing. Had Java been better, Kotlin wouldn't have needed to be invented.
> Want GC lang with lots of libraries? Use Java.

Want a fast-to-develop and easy-to-use language? Just use C#.

> Want GC free lang with safety? Use Rust.

Want a language which you can use for almost everything? Web front-end, web backend, services, microcontrollers, games, desktop and mobile? Use C#.

> Otherwise just use C. Or C++.

Or whatever works for you. Whatever you like, find fun and makes you a productive and happy developer. There is nothing wrong in using C or C++. Or Python. Or Haskell.
Maybe slightly. But the difference is too marginal to change languages over.
> had many libraries and frameworks that did the same thing
Maybe, but it also has many more libraries doing the one obscure thing that you need for your domain.
In a vacuum, C# is a very good language, probably better than Java (as it should be given that it was able to learn some lessons from early Java). But in the wider world of programming languages they really are extremely close to each other, they're suitable for exactly the same problems, and Java has a substantially greater mass of libraries/tooling and probably always will do.
That's basically modern-day Java, with Lombok and other tidbits. Furthermore, if I recall correctly, Java has better performance on web benchmarks than C#.
> Had Java been better, Kotlin wouldn't need to be invented.
Kotlin was invented to make a sugary version of Java, and thus drive more JetBrains sales. It got popular because Oracle got litigious. As someone who's been on the Java train for almost two decades, what usually happens, if any JVM Lang becomes too popular, Java has the tendency to reintegrate its features into itself.
> Whatever you like, find fun and makes you a productive and happy developer. There is nothing wrong in using C or C++. Or Python. Or Haskell.
Sure, assuming it fits the domain. Like, don't use Python for kernel dev or Java for some obscure ML/AI when you could use Python.
I wouldn't call Lombok "modern", more like "a terrifyingly hacky way to tackle limitations in the language despite the unwillingness to make the language friendlier", and a far cry from what source generators can do in C#.
But even if you account for that, records in Java do most of what Lombok used to do: make the class externally immutable, add default `toString`, `equals` and `hashCode` implementations, and allow read-only access to fields.
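(For the C# side of the comparison, records there cover the same ground; a minimal sketch, with a made-up type, of the members the compiler generates:)

```csharp
using System;

// Made-up type for illustration: a positional record gives you value-based
// Equals/GetHashCode, a readable ToString, and init-only (externally
// immutable) properties, plus non-destructive mutation via `with`.
public record Point(int X, int Y);

class Demo
{
    static void Main()
    {
        var a = new Point(1, 2);
        var b = new Point(1, 2);
        Console.WriteLine(a);            // Point { X = 1, Y = 2 }
        Console.WriteLine(a.Equals(b));  // True (value equality)
        var c = a with { Y = 3 };        // copy with one member changed
        Console.WriteLine(c);            // Point { X = 1, Y = 3 }
    }
}
```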
> what source generators can do in C#
Having had the displeasure of developing source generators in C# (in Rider), what they do is make code impossible to debug while working on it. On top of relying on an ancient version of netstandard.
I cannot emphasize enough how eldritch working on them is. While developing, whatever change you write isn't reflected when you inspect the generated code, and caching can keep old code around even after recompilation unless you restart the build server, or something.
So whenever you try to debug your codegen libs, you toss a coin:
- heads: it shows the correct code
- tails: it's showing the previous iteration of the generated code, but the new code is in, so the debugger will at some point get confused
- medusae: it's showing the previous iteration of the generated code, but the new code hasn't been propagated, and you need to do some arcane rituals to make it work.
Hell, even as a user of codegen libs, updating them has caused miscompilation because the build was still caching the previous codegen version.
They require netstandard2.0, which is the only version that is actually useful, since it supports .NET Framework 4.x.
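For anyone who hasn't touched them, a minimal sketch of what such a generator looks like (the class name and emitted code below are made up; the project containing it targets netstandard2.0 and references Microsoft.CodeAnalysis.CSharp):

```csharp
using Microsoft.CodeAnalysis;

// Illustrative sketch of a minimal incremental source generator.
[Generator]
public sealed class HelloWorldGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Emit one fixed file into every compilation that uses the generator.
        context.RegisterPostInitializationOutput(ctx => ctx.AddSource(
            "Hello.g.cs",
            "// <auto-generated/>\n" +
            "internal static class Hello\n" +
            "{\n" +
            "    public const string Greeting = \"Hello from the generator\";\n" +
            "}\n"));
    }
}
```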
As a fan of Records, this is a punch to the gut.
The ecosystem is years and years away from using records. Almost every huge decade-plus monolith project is still on Java 8, and those who moved to something newer still can't be liberal with records, because none of the serialize/deserialize libs can work with them: everything, to this day, abuses reflection for generating objects, like the giant fucking hack it is.
Apologies for the rant, but I migrated a big project to 21 early this year, am in the middle of migrating another 1M+ line codebase to 21, and the sorry state of records is such a sad thing to witness.
I give it a decade before records are anything but 'a fancy feature'.
With that said, Lombok is not needed in any form there either: use a constructor with fields and make them public final. If you have too many fields in a class, it's likely a good idea to split it regardless.
In all cases, dumb getters/setters are just public fields that take more metaspace (and produce larger bytecode; the latter matters somewhat for inlining).
Also, if I had 1M LOC and my serialization/communication libraries didn't support whatever I've picked - I'd patch the libraries to support it.
And I'm saying that even after writing most of the first project (closing in on 100kLOC now) in 21, I still can't have records where they make the most sense (service boundaries), because the libs and the larger ecosystem don't support them.
> Also, if I had 1M LOC and my serialization/communication libraries didn't support whatever I've picked - I'd patch the libraries to support it.
1MLOC in Java land is... not unusual. And if you're talking about patching libs like jackson/jaxb/whatever, my good person, you truly underestimate how much actual work people have (where a Java upgrade is a distant afterthought; I only did it because I wanted to scratch the itch and see how far I could push processes in my org), or how much impact that might have for a drive-by contribution. Updating such core ecosystem libs in Java is no small feat. They are used absolutely everywhere, and even tiny changes require big testing. There is a reason you find Apache libs in every single project: they have matured over the past couple of decades without such drastic rug-pulls.
Also, I'd actively remove all Apache Commons as well. Even in Java 8, most of the functionality is redundant.
With all that said, I didn't mean it as an underestimation.
I am part of the dark matter myself, although self-initiated Java upgrades already put me on the right side of the bell curve.
> Also, I'd actively remove all Apache Commons as well. Even in Java 8, most of the functionality is redundant.
I used to think that. Then I had to decompress zip files in memory and selectively process the children. Of course Java has the functionality covered in the stdlib, but it requires so much boilerplate, and commons-compress was such a pleasure that I was done in 10 minutes. The same goes for the other Apache libs too.
OTOH, I wholeheartedly agree about Lombok being an unjustified curse.
> web benchmarks
https://www.techempower.com/benchmarks/#hw=ph&test=composite...
The TechEmpower benchmarks do seem to reflect the general state of the Java web framework ecosystem, with Vert.x being the hyper-fast web framework and Spring being way slower.
If you take the standard template for any of these frameworks (Java, C# or any other language) and you add authentication etc., the real performance will be 5-10% of the numbers reported in those benchmarks. Have a look through some of the weirdness in the implementations; it's wild (and sometimes educational). The .NET team especially has done stuff specifically to get faster on those benchmarks.
Could you give me a pointer or two? I wondered about that myself, especially considering the massive improvement from "old" .NET to the core/Kestrel-based solutions - but a quick browse a while ago mostly left me astonished at how... well, for lack of a better word, banal most of the code was.
Agreed though, the lack of all kinds of layers like auth, ORM etc. is sadly a drawback of these kinds of benchmarks, if understandable - it would make comparability even trickier and risks the comparison matrix of systems/frameworks/libraries exploding in size. But yeah, they would be nice data points to have. :)
The custom BufferWriter stuff is pretty neat, though also not really something most people will reach for. And there is more, like the caching of StringBuilders etc.
But it also doesn't use the actual HTTP server to build headers; they just dump a string into the socket [2], which feels a bit unrealistic to me. In general the BenchmarkApplication class [3] is full of non-standard stuff that you'd normally let the framework handle.
[1] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... [2] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... [3] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
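To make the "dump a string into the socket" point concrete, here is a rough, self-contained sketch of that pattern - not the actual benchmark code; the port and payload are made up. The full response, headers included, is pre-encoded and written straight to the connection, bypassing the framework's header handling:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

// Illustrative sketch only: serve one hand-rolled HTTP response and exit.
const string body = "Hello, World!";
byte[] response = Encoding.ASCII.GetBytes(
    "HTTP/1.1 200 OK\r\n" +
    "Content-Type: text/plain\r\n" +
    $"Content-Length: {body.Length}\r\n" +
    "Connection: close\r\n" +
    "\r\n" + body);

var listener = new TcpListener(IPAddress.Loopback, 8080); // made-up port
listener.Start();
using (var client = listener.AcceptTcpClient())            // one connection
    client.GetStream().Write(response, 0, response.Length); // raw bytes, no HttpResponse
listener.Stop();
```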
Puh, yeah, I see what you mean, much of the stuff in [2] and [3] is rather... bespoke, especially compared to the minimal and MVC targets. Not really what I'd consider "realistic" as per the benchmark's definition.
But TBH, I wouldn't consider [1] gross, on the contrary - simple, fast, lightweight Razor templating without other MVC (or other external) dependencies isn't that unusual a use case and something I've often thought ASP.NET Core was missing (even Razor Pages feel like overkill if you just want to quickly generate some dynamic HTML).
.NET is perfectly capable of standing on its own, and if there are specific areas that need improvement - this should serve as a push to further improve DB driver implementations and make ASP.NET Core more robust against various feature configurations. It is already much, much faster than Spring which is a good start, but it could be pushed further.
I'd like to note that neither Go nor Java are viable for high-performance programming in a way that C# is. Neither gives you the required low-level access, performance oriented APIs, ability to use zero-cost abstractions and platform control you get to have in .NET. You can get far with both, but not C++/Rust-far the way you can with C#.
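To illustrate the kind of control meant here, a small sketch using generally available APIs (not code from the article): stack allocation, spans and portable SIMD via Vector<T>, with no heap allocation involved.

```csharp
using System;
using System.Numerics;

// Sketch of the low-level toolbox in C#: stack allocation, spans, and
// portable SIMD through Vector<T> -- no heap allocations, no GC pressure.
Span<float> values = stackalloc float[256];
values.Fill(1.5f);
Console.WriteLine(Sum(values));

static float Sum(ReadOnlySpan<float> data)
{
    var acc = Vector<float>.Zero;
    int i = 0;
    // Process Vector<float>.Count lanes at a time (e.g. 8 floats with AVX2).
    for (; i <= data.Length - Vector<float>.Count; i += Vector<float>.Count)
        acc += new Vector<float>(data.Slice(i, Vector<float>.Count));

    float total = Vector.Sum(acc);   // horizontal add (.NET 6+)
    for (; i < data.Length; i++)     // scalar tail
        total += data[i];
    return total;
}
```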
Yeah, except if you are working on web servers, the quality of the framework and its supporting libraries is much more important than what the code could theoretically achieve. What is the point of being able to do 200 mph when you only ever drive at 30 mph?
> Neither gives you the required low-level access, performance oriented APIs, ability to use zero-cost abstractions.
Java is working on high-performance abstractions, see the Vector API (SIMD) and Project Valhalla (custom primitive types).
Sure, C# has a theoretical leg up (for which it paid dearly in the backwards incompatibility that reified generics caused), but most of the libraries don't use low-level access or SIMD optimizations or whatnot.
If you meant BenchmarksGame, then it's the other way around - Java is most competitive where it relies heavily on the GC [0], and loses in the areas which require the ability to write a low-level implementation [1], which C# provides.
The only places where there are C calls are the pidigits [2] and regex-redux [3] benchmarks, in both of which the Java submissions have to import pre-generated or pre-made bindings to GMP and PCRE2 respectively. As do all other languages, with varying degrees of "preparation".
[0]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[1]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[2]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[3]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Even if you prohibit PCRE2, the .NET submissions using the out-of-box Regex engine end up being about 4 times faster than Java.
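"Out-of-box" meaning the BCL engine used roughly like this - a sketch with a stand-in input, not the actual regex-redux submission:

```csharp
using System;
using System.Text.RegularExpressions;

// Sketch only: the built-in engine with RegexOptions.Compiled,
// counting case-insensitive pattern variants in a DNA-like string.
string seq = "GGGTAAATTTACCCTAGGGTAAACCCT"; // stand-in for the real input file
string[] variants =
{
    "agggtaaa|tttaccct",
    "[cgt]gggtaaa|tttaccc[acg]",
};

foreach (var v in variants)
{
    int count = Regex.Matches(seq, v,
        RegexOptions.IgnoreCase | RegexOptions.Compiled).Count;
    Console.WriteLine($"{v} {count}");
}
```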
Surprisingly, even though .NET's BigInteger is known for its inefficiency, it ends up being more memory-efficient and marginally faster at pidigits than the Java submission that does not use GMP. The implementations are not line-by-line equivalent, so this may not be perfectly representative of the performance of each BigInt implementation.
My point being - if you look at the submissions closer, the data gives a much clearer picture, and only supports the argument that C# is a very usable language for tasks where one would usually reach for C, C++ or Rust instead.
Sure looks like it's written in Java!
Look at all the programming language implementations that provide big integers by calling out to GMP. Why would it be "cheating" when available to all and done openly? Libraries matter.
2 of 10 (pidigits and regex-redux) allow use of widely available third party libraries — GMP, PCRE, RE2 — because there were language implementations that simply wrapped those libraries.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
It’s not that easy. I assume other programs hide the use in macros and libraries, in ways far beyond my simple understanding.
But it’s not that easy.
Filters some but maybe not all and maybe filters some wrongly.
Where there are few enough programs that readers should check that the programs they compare seem appropriate for their purpose.
Why? Did you mean both use intrinsics or both don't?
> Sometimes, you just want to see
As-it-says, look for more-secs less-gz-source-code -- probably less optimised.
Look at all the programming language implementations that provide big integers by calling out to GMP. Why would it be "cheating" when available to all and done openly? Libraries matter.
>Most the Debian benchmarks for C# are cheaty too.<
Just name-calling.
If you don't think it's appropriate to compare the pi-digits and regex-redux programs, simply ignore them and compare the other 8!
Lombok is exceptionally backwards. You don't need getters/setters, and you should know how to write hashCode (and equals).
...and records exist
C# is a better-designed language, has really strong tooling, a strong ecosystem, and a well-designed standard library.