Platforms, especially compilers and runtimes, need to be absolutely strict in enforcing semantic restrictions so as to preserve optimization opportunities for the future.
The idea is that because not much code actually needs to mutate finals (and even if it does, that operation is already limited today to classes in the code's own modules or ones explicitly "open" to it), the application will need to grant a permission to a module that wants to mutate finals, similar to how we've recently done things with native calls and unsafe memory access.
E.g. Effective Java is a requirement inside Google, so even public GDrive APIs have final classes. External APIs are exactly the thing you'd want to mock.
Of course you get code bloat defining interfaces for everything you intend to implement once, and you have to enforce these rules, but this is something that could be made easier. Not in Java, but imagine a language where:
- Concrete classes can only be used with new, or in some platform-provided DI container.
- Methods can only accept interface types and return interface types.
- Fields are private only; all public/protected access is via properties (or getters/setters, it just has to be declarable in an interface)
- You have a ".interface" syntax (akin to ".class" but for types) that refers to the public members of a class without tying you to the concrete class itself. You can use this as a shorthand instead of declaring separate interfaces for everything.
E.g.:
```
final class GDrive { ... }
public Download file(GDrive.interface drive) { ... }
class MockDrive implements GDrive.interface { ... }
```
The closest I can think of is a hypothetical typed variant of NewSpeak, but maybe something like this exists already?
Interfaces with one implementation are terrible. They just clutter everything and make the code navigation a pain, so it's good that people are avoiding them.
Perhaps a special "test-only" mode that allows patching finals is a better idea.
I mean, using this logic, every single function can be hidden behind an interface. Even the sole implementation of the interface can be hidden behind a yet another interface.
If there's just one implementation, then the interface is not necessary!
- Very broad, unspecific contract that may even obscure the method's purpose
- You cannot modify the contract without modifying the class AND vice versa
- Shrinking a contract (taking away elements) is far harder and more likely to cause breakages in other code than growing a contract
- Mocks become more cumbersome because the contract is so broad
- Changes to the concrete class cause ripple effects in code that doesn't care about the change
And a second implementation basically never materializes, yet you still have to carry the extra overhead of the interface.
Java also supports private/public method visibility, and this can be used to clearly show the contract. No need for interfaces.
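A minimal sketch of that idea (the class and method names are invented for illustration): the public methods are the contract, and the private ones stay free to change.
```
import java.util.List;

// Hypothetical example: the public surface of the class *is* the contract.
public final class InvoiceCalculator {

    public record LineItem(long unitPriceCents, int quantity) {}

    // Part of the contract: callers may rely on this.
    public long totalCents(List<LineItem> items) {
        return items.stream().mapToLong(this::lineTotal).sum();
    }

    // Implementation detail: invisible outside the class, free to change.
    private long lineTotal(LineItem item) {
        return item.unitPriceCents() * item.quantity();
    }
}
```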
If you're 100% sure the one implementation will never change, then I'd say you're right. But that requires telling the future.
They intend to write a test double which implements the interface. But then they don't get around to writing tests?
https://github.com/google/auto/blob/main/value/userguide/ind...
I can no longer recall exactly what Bloch said, I may have to search through some of my old writing to find it, but at one point he admitted he didn’t really understand type theory when he designed the collections API. And while I appreciate the honesty, and the reason (he was trying to illustrate that this stuff is still too hard if “even he” didn’t get it), I think it paints him in a rather worse light.
But I already knew that about him from working with that code for years and understanding LSP, which he clearly did not.
I don’t know why they thought he should be the one writing about how to use Java effectively when he was materially responsible for it being harder to use, but I’m not going to give him any money to reward him. And there are other places to get the same education. “Refactoring” should be a cornerstone of every education, for much the same reason learning to fall without hurting yourself is the first thing some martial arts teach you. Learn to clean up before you learn to make messes.
He said at one point that he had thought of a different way to decompose the interfaces for collections that had less need for variance, with read and write separated, but he thought there were too many interfaces and they would confuse people. But when I tried the same experiment (I spent years thinking about writing my own language)… the thing is when you’re only consuming a collection, a lot of the types have the same semantics, so they don’t need separate read interfaces, and the variance declarations are much simpler. It’s only when you manipulate them that you run into trouble with Liskov, and things that are structurally similar have different contracts. The difference in type count to achieve parity with Collections was maybe 20% more, not double. So to this day I don’t know what he’s talking about.
Most APIs should only consume collections from callers, so friction against mutation in your interface is actually a good thing.
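To illustrate what "only consuming a collection" can look like in a signature (a hypothetical method, not from the thread): an upper-bounded wildcard accepts any caller's collection for reading, and the friction only shows up if you try to mutate it.
```
import java.util.Collection;
import java.util.List;

public class Totals {
    // Consumes the collection: reads elements, never adds to it.
    // The wildcard lets callers pass a List<Integer>, Set<Double>, etc.
    public static double sum(Collection<? extends Number> values) {
        double total = 0;
        for (Number n : values) {
            total += n.doubleValue();
        }
        // values.add(42) would not compile: with "? extends Number" the
        // compiler cannot prove any particular element type is safe to insert.
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3))); // works with a List<Integer>
    }
}
```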
So Josh Bloch opted against separate read/write interfaces.
> the thing is when you’re only consuming a collection, a lot of the types have the same semantics, so they don’t need separate read interfaces, and the variance declarations are much simpler.
And you opted against separate read/write interfaces.
> the thing is when you’re only consuming a collection, a lot of the types have the same semantics, so they don’t need separate read interfaces
> Most APIs should only consume collections from callers
I'm having trouble understanding what you mean by "consuming a collection." Can you expand?
Normally JDK 21 would let you get private/final reflective access to your own “module” but not to the stdlib modules, but so many libraries want private access to stdlib objects, such as all the various date and time objects.
Which ones have you run into? Jackson doesn’t even require you to --add-opens anything.
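For context, a rough sketch of the failure mode being discussed (assuming LocalDate still keeps its day-of-month in a private field named day): reflective access to a JDK-internal field throws unless the package is explicitly opened.
```
import java.lang.reflect.Field;
import java.time.LocalDate;

public class OpensDemo {
    public static void main(String[] args) throws Exception {
        Field day = LocalDate.class.getDeclaredField("day");
        // Throws InaccessibleObjectException on JDK 16+ unless the JVM is
        // started with: --add-opens java.base/java.time=ALL-UNNAMED
        day.setAccessible(true);
        System.out.println(day.get(LocalDate.now()));
    }
}
```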
> And mocking tools have to resort to bytecode manipulation to mock the final classes
Well which is it? Presumably you use said mocking tool anyway, so it's not your effort that's being expended.
"Final all the things" really doesn't go far enough. There is little point substituting a mutable hashmap for a "final" mutable hashmap, when the actual solution is for the standard library to ship proper immutable collection classes.
In any case, I prefer to avoid mockito anyway, so it's a non-issue for me. Just do plain ol' dependency injection by passing in dependencies into constructors.
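A minimal sketch of that plain constructor-injection style (the interface and class names are made up): a hand-written fake replaces the mocking framework in tests.
```
// Hypothetical names, just to illustrate constructor injection by hand.
interface Clock {
    long nowMillis();
}

final class SystemClock implements Clock {
    public long nowMillis() { return System.currentTimeMillis(); }
}

final class SessionManager {
    private final Clock clock;

    SessionManager(Clock clock) { // dependency passed in, not looked up
        this.clock = clock;
    }

    boolean isExpired(long startedAtMillis, long ttlMillis) {
        return clock.nowMillis() - startedAtMillis > ttlMillis;
    }
}

// In a test, no mocking library needed:
//   SessionManager m = new SessionManager(() -> 10_000L);
//   assert m.isExpired(0L, 5_000L);
```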
And I’ve never forgiven him for it
Google uses mock and fake implementations of interfaces, and provides dependency injection frameworks (Guice and Dagger) for managing them.
It was garbage.
I don’t recall doing it for domain objects though.
Mocking final classes is a blunder on its own. Most classes should be package-private, not public, so being final would have close to zero relevance. Personally I do not use mocking tools/frameworks at all.
OTOH, there is very little performance benefit to making classes final. Java performs CHA (class hierarchy analysis) anyway.
https://docs.oracle.com/javase/specs/jls/se7/html/jls-17.htm...
Given this, it's not surprising others thought it was acceptable too.
The main issue is safety, because you might modify something that isn’t modifiable and cause a SEGV, and that is precisely the concern access modifiers are meant to address.
e.g. SecurityManager for applets will not let you setAccessible(true) on private fields of system classes
I do wish that I couldn’t have done so, shrug, business needs
When you tell them that it doesn't work, and that it cannot work without violating the semantics of the JVM, they wave their hands and say "look, it does work here". And it looks like, yes, if the stars align in that specific constellation, it may work.
My concern, however, is about the cost of doing this. Say I have an easy way to call my Kotlin library from Swift in a mobile app, doesn't it mean that now my iOS app will load some kind of JVM (I don't know what would run on iOS)? Similarly, if I could call Swift from an Android app, wouldn't it load some kind of Swift runtime? It all brings overhead, right?
I guess I fear that someday, developers will routinely depend on e.g. a Swift library, that will depend on a Kotlin library (loading some JVM), that will itself use JNI to call some C++. Just like with modern package managers, programs quickly end up having 100+ transitive dependencies (even with just a few direct dependencies) just because it was "too easy" for the developer not to care.
This article about the "stack allocation" misnomer in Java in particular is one of my favorites: https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacemen.... What the JVM really does is escape analysis + scalar replacement.
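Roughly the kind of code that article covers (a simplified sketch, not the article's benchmark): a short-lived object that never escapes the method, which the JIT can break apart into plain locals after escape analysis.
```
public class EscapeDemo {
    record Point(int x, int y) {}

    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            // Never escapes the loop body: after inlining and escape analysis
            // the JIT may scalar-replace it, i.e. keep x and y in registers
            // instead of allocating a Point on the heap.
            Point p = new Point(i, i + 1);
            total += (long) p.x() * p.x() + (long) p.y() * p.y();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000_000));
    }
}
```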
• Bottomless resource of developers with Java experience
• Vast array of existing libraries many of which are enterprise focused
• Managing very large codebases with many contributors is straightforward
• Standard VM that's very solid (decades of development), reasonably fast, and supported on essentially all platforms.
It doesn't have quite the stranglehold (even in Enterprise) that it had in perhaps the early 2000s and it's the archetypical "blub" language, but it's a perfectly reasonable language to choose if you're expecting Enterprise scale and pure performance is less valuable to you than scaling out with large numbers of developers.
I like Rust, but it's Java that puts bread on my table.
> Bottomless resources of developers with Java experience.
With Java experience, but what fraction have a systems outlook? What fraction have experience with other languages to ensure that the code they write is simple, understandable, and direct? My own experience is that too many come out addled by Enterprise Java idioms, and when you actually write some code in Erlang or Go you realize systems aren't as complicated as they have been made out to be.
> Managing very large codebases ...
I wonder if this is self-fulfilling. My theory is that these codebases are huge because their designs are enterprisey. The primary driver of complexity is indirection: factories, dependency injection, microservices; these are all part of the same malaise.
It depends what you mean by systems outlook but JVM based code is pretty common (to the point I’d say ubiquitous) in large distributed systems.
In open source it’s much the same. Many of the large Apache projects are in JVM languages, for example.
> The primary drivers of complexity are indirection: factories, dependency injection, microservices, these are all part of the same malaise
The indirection in Java does drive me crazy. But dependency injection is a problem to solve in every language and libraries that can do code generation at compile time like Dagger2 make this predictable, debuggable, and fairly easy to reason about on the JVM.
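A rough sketch of that compile-time DI style (only the Dagger 2 annotations are real; the types are invented): the wiring code is generated during compilation, so a missing binding is a build error rather than a runtime surprise.
```
import javax.inject.Inject;
import dagger.Component;

interface PaymentGateway { void charge(long cents); }

class ConsoleGateway implements PaymentGateway {
    @Inject ConsoleGateway() {} // tells Dagger how to construct this
    public void charge(long cents) { System.out.println("charged " + cents); }
}

class CheckoutService {
    private final PaymentGateway gateway;

    @Inject CheckoutService(ConsoleGateway gateway) { // constructor injection
        this.gateway = gateway;
    }

    void checkout() { gateway.charge(999); }
}

@Component
interface CheckoutComponent {
    CheckoutService checkoutService(); // implemented by generated code
}

// Usage (DaggerCheckoutComponent is generated at compile time):
//   CheckoutService svc = DaggerCheckoutComponent.create().checkoutService();
```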
Microservices are, in my opinion, more of a business organization solution than one tied to any specific language. If you haven’t read Steve Yegge’s blog post about Amazon vs Google I think it’s good reading on why/when SoA is a good idea.
1. Modern frameworks and AI assistance can help you ramp up a decent backend in days. A solo tech co-founder needs to know only Java or Kotlin and some frontend stack to build an MVP quickly, and will spend so much time on non-coding tasks that language features will be irrelevant. Swift can be the second language if you go mobile-native.
2. Scaling isn’t the problem you are going to have for quite a while. It is quite likely that the problem of scaling the team will come first, and any performance bottlenecks will be noticeable much later. Java is good for large teams.
That said, from a business perspective, if you want a larger talent pool, fast delivery cycles, and something that may remain your core stack in the long term, Java or Kotlin is probably the best choice. If you want fancy tech as a perk to attract a certain cohort of developers, or you have that rare business case, you can choose Go or Rust. Python is popular in academia and bootcamps, but TBH I struggle to see the business value of it for generic backends.
I wouldn’t dismiss the JVM as a whole, it is a marvel of engineering and is evolving quickly nowadays (see loom, panama, leyden, etc…).
Java is better than Go on every count, and almost all of your cases are 90% done by Java, so it's quite clearly a very good choice for almost everything.
I'm not knocking Go or Python - if those are your preferred tools, they're more than adequate. Java, however, isn't nearly as irrelevant as you may perceive.
No one is going to rewrite the JVM. Even though there are several implementations, all of them are a mix of C, C++, assembly, and Java; zero Kotlin and Scala.
Yet as usual there is this little island of the Kotlin or Scala ecosystem, with its own replacement of everything, and continuous talk about how it is possible that the platform that makes its existence possible hasn't been rewritten in them.
TypeScript and Clojure folks are traditionally more welcoming of the platform; they rather appreciate the symbiotic relationship with the host, and are much more fun to hang around with.
Is this posturing? Do you feel cool? Why did you come here and bloviate over something as silly as a language choice?
I’ve spent years writing Java and later Scala, in academia and later production. I’ve always followed to see how the JVM and the language/ecosystem has progressed. And now I don’t use it at all. Is it really that odd to take a temperature on a site filled with other tech folks? I don’t understand why you took it so negatively and use words like bloviate, or attack me as just posturing to look cool (how does one look cool on a geeky Internet forum?). One of the HN tenets is to “converse curiously,” which is exactly my mindset when I wrote my comment. And if you look at the other replies, it seems others took it that way as well with healthy discussion.
I like Java (but I love Kotlin), and it seems like work on the JVM is more active than ever. I can understand your preferences, but what I observe e.g. with Desktop apps is that people use Javascript and embed a whole browser with it (e.g. ElectronJS). I would always prefer a JVM desktop app. Also with modern UI frameworks (including e.g. Compose), I am really hoping that the JVM will get a boost for Desktop apps.
It's really tiring to see these "Oh this language sucks" posts under articles that discuss details and techniques in languages - it added nothing useful to the conversation, especially since it was framed as a personal preference. Who cares that you don't care about Java?
It was framed as a personal factoid (used to use Java, now don’t), followed up by a question for everyone else: “Am I the only one?” And that’s why there’s healthy discussion under my comment if you look at the replies. I think it added quite a lot if you read it all. All of these quirks add up to a meta comment on the language.
I believe that it's enough to not upvote a message if you find it irrelevant. The upvoted messages will stay at the top. It is not completely off-topic: the people who will read the featured article have knowledge about Java, after all.
My concern is just that downvoting is fairly aggressive. You don't need to be massively downvoted many times to effectively end up being silenced (if you are downvoted 2-3 times on polite, genuine questions, chances are that you won't come back). By aggressively downvoting everything we don't find particularly relevant, I feel like we just encourage bubbles. "We are a group of Java enthusiasts; just don't come talk to us if you are not a Java enthusiast yourself. Find a group of people who have the same preferences as you do instead."
I have the upvote counters hidden, because I don't want some misguided individual to influence what should be important for me and what shouldn't. I make my own filtering choices.
I wish that one day the internet would realize that upvote counters are more harmful than they are useful, just as it did for downvotes. Upvotes in general promote bubbles.
I think anything JVM (be it Java, Kotlin or Scala) would be very good there. A lot better than ElectronJS.
Probably not. Java had stagnated for quite a while, entirely missing the lightweight threading and/or async/await revolution of the last decade. The JVM's ergonomics also just suck; a lot of apps _still_ have to use -Xmx switches to size the heap, as if we're still using a freaking Macintosh System 6!
On the other hand, it's a very mature ecosystem with plenty of established battle-tested libraries.
Is it not you who stagnated a bit?..
async/await is not really a revolution, so much as a bandaid bringing a modicum of parallelism to certain programming languages that don't have a good threading model.
Xmx is mostly a thing if you have very small RAM, or some sort of grievously misconfigured container setup. By default the heap grows up to 25% of the system RAM, which is a relatively sane default.
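A quick way to see what cap the JVM actually picked (the 25% figure comes from the default -XX:MaxRAMPercentage=25.0; -Xmx or a different percentage overrides it):
```
public class HeapCap {
    public static void main(String[] args) {
        // Prints the effective max heap; run with -Xmx2g or
        // -XX:MaxRAMPercentage=75.0 to see the cap change.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MiB%n", maxBytes / (1024 * 1024));
    }
}
```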
Well, yes. Virtual threads were released as part of JDK 21 a year ago. So far, adoption has been spotty. They are also not implemented in the best possible way.
> Xmx is mostly a thing if you have very small RAM, or some sort of grievously misconfigured container setup. By default the heap grows up to 25% of the system RAM, which is a relatively sane default.
Other, saner runtimes (like Go) do not even make developers care about heap sizing. It just works.
IIRC .NET just sets it to 75% of available memory.
Out of all three, the Go one is the least configurable. .NET's GC is host-memory aware and adjusts heap size automatically. It does not need Xmx, as it targets keeping host memory pressure low; the hard limit really is only available memory, unless you override it with configuration.
It has been further improved as of recently to dynamically scale heap size based on allocation rate, GC % time and throughput targets to further reduce sustained heap size.
They deliberately took the longer route, aiming to integrate lightweight threads in a way that doesn't force developers to change their existing programming model. No need for callbacks, futures, coroutines, async/await, whatever. This required a massive effort under the hood and rework to many core APIs. Even code compiled with decade old Java versions can run on virtual threads and benefit, without any refactoring or recompilation.
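A minimal sketch of what "no new programming model" means in practice: the same blocking-style code, just scheduled onto virtual threads (JDK 21+).
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // Plain blocking code, no futures or async/await; each task gets a
        // cheap virtual thread, so tens of thousands of them are fine.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                pool.submit(() -> {
                    Thread.sleep(100); // blocking call only parks the virtual thread
                    return "task " + id;
                });
            }
        } // close() waits for the submitted tasks to finish
    }
}
```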
> ...and/or async/await revolution of the last decade
async/await is largely syntactic sugar. Java has had the core building blocks for asynchronous programming for years, with CompletableFuture (2014, replacing the less flexible Future introduced in 2004) and NIO.2 (2011, building on the original NIO from 2002) for non-blocking I/O, along with numerous mature libraries that have been developed on top of them over time.
1. https://openjdk.org/jeps/8329758
2. https://m.youtube.com/watch?v=wcENUyuzMNM&embeds_referring_e...
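On the CompletableFuture point above, a small sketch of composing asynchronous steps without any special syntax (the pipeline itself is made up):
```
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id); // pretend I/O
    }

    static CompletableFuture<Integer> fetchScore(String user) {
        return CompletableFuture.supplyAsync(() -> user.length() * 10); // pretend I/O
    }

    public static void main(String[] args) {
        String report = fetchUser(42)
                .thenCompose(ComposeDemo::fetchScore) // roughly what "await" expresses
                .thenApply(score -> "score=" + score)
                .join(); // block only at the very end
        System.out.println(report);
    }
}
```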