https://shipilev.net/jvm/anatomy-quarks/17-trust-nonstatic-f... is a damned shame. User code misses out on an important optimization available only to system-provided classes because certain frameworks have abused JNI and reflection to mutate final fields, which by all rights should be immutable.
Platforms, especially compilers and runtimes, need to be absolutely strict in enforcing semantic restrictions so as to preserve optimization opportunities for the future.
As part of our "integrity by default" strategy [1] we're changing that. There will be a JEP about it soon.
The idea is that because not much code actually needs to mutate finals (and even if it does, that operation is already limited today to classes in the code's own modules or ones explicitly "open" to it), the application will need to grant a permission to a module that wants to mutate finals, similar to how we've recently done things with native calls and unsafe memory access.
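For context, the mutation being gated is a one-liner today. A minimal sketch (class and field names are made up):

```java
import java.lang.reflect.Field;

class Config {
    final int port;
    Config(int port) { this.port = port; }
}

public class FinalMutation {
    public static void main(String[] args) throws Exception {
        Config c = new Config(8080);
        Field f = Config.class.getDeclaredField("port");
        // Allowed today because Config lives in our own (unnamed) module;
        // this is exactly the kind of write the new permission would gate.
        f.setAccessible(true);
        f.setInt(c, 9090);
        System.out.println(c.port); // prints 9090
    }
}
```

It's precisely because this works silently that the JIT can't treat `port` as a constant.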
I wonder if it's thanks to some people blindly following the Effective Java book, which sinned by saying "final all the things". So now we cannot easily mock final classes in tests, and mocking tools have to resort to bytecode manipulation to mock them.
E.g. Effective Java is a requirement inside Google, so even public GDrive APIs have final classes. External APIs are exactly the thing you'd want to mock.
I would say less overuse of final, more underuse of interfaces. If everything takes/returns/stores values by interfaces (excluding data containers with no behavior) then you don't need to "jailbreak" any class to mock it.
Of course you get code bloat defining interfaces for everything you intend to implement once, and you have to enforce these rules, but this is something that could be made easier. Not in Java, but imagine a language where:
- Concrete classes can only be used in new, or in some platform provided DI container.
- Methods can only accept interface types and return interface types.
- Fields are private only, all public/protected is via properties (or getters/setters, it just has to be declarable in an interface)
- You have a ".interface" syntax (akin to ".class" but for types) that refers to the public members of a class without tying you to the concrete class itself. You can use this as a shorthand instead of declaring separate interfaces for everything.
Eg.
```
final class GDrive { ... }
public Download file(GDrive.interface drive) { ... }
class MockDrive implements GDrive.interface { ... }
```
The closest I can think of is a hypothetical typed variant of NewSpeak, but maybe something like this exists already?
> I would say less overuse of final, more underuse of interfaces.
Interfaces with one implementation are terrible. They just clutter everything and make the code navigation a pain, so it's good that people are avoiding them.
Perhaps a special "test-only" mode that allows patching finals is a better idea.
Custom classloaders and Java agents allow modifying bytecode before it is loaded into the JVM, so it's possible to remove `final`, change visibility, and do basically anything.
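A rough sketch of the agent side, using the real `java.lang.instrument` API; the target class name is made up, and the actual flag-stripping (which would be done with a bytecode library like ASM) is elided:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class UnfinalAgent {
    // Launched via -javaagent:unfinal.jar (jar name is hypothetical).
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String name, Class<?> cls,
                                    ProtectionDomain pd, byte[] bytes) {
                return rewrite(name, bytes);
            }
        });
    }

    static byte[] rewrite(String name, byte[] bytes) {
        if (!"com/example/Target".equals(name)) return null; // null = leave class unchanged
        // Here a bytecode library (e.g. ASM) would clear the ACC_FINAL flag on
        // the class and its methods, then return the rewritten class file bytes.
        return bytes.clone();
    }
}
```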
I always wonder what is the motivation when people do interfaces with just one implementation.
I mean, by this logic, every single function can be hidden behind an interface. Even the sole implementation of the interface can be hidden behind yet another interface.
If there's just one implementation, then the interface is not necessary!
An interface, esp. when returned from a method, is the best way to define a limited contract. If you return the concrete class, you have the following potential issues:
- Very broad, unspecific contract that may even obscure the method's purpose
- You cannot modify the contract without modifying the class AND vice versa
- Shrinking a contract (taking away elements) is far harder and more likely to cause breakages in other code than growing a contract
- Mocks become more cumbersome because the contract is so broad
- Changes to the concrete class cause ripple effects in code that doesn't care about the change
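To make those bullets concrete, a small made-up example of returning a narrow contract instead of the concrete class:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical repository: callers only see the narrow contract.
final class UserRepo {
    private final List<String> users = new ArrayList<>(List.of("ada", "bob"));

    // Returning Iterable<String> rather than ArrayList<String> exposes exactly
    // one capability (iteration). The internals can later switch to a Set, a
    // DB cursor, etc. without any ripple effect on callers, and a test double
    // only has to fake iteration, not the whole ArrayList surface.
    Iterable<String> allUsers() {
        return List.copyOf(users); // defensive copy, so callers can't mutate
    }
}
```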
The problem is that all the items you've listed are basically "just in case we'll need to do something later".
And this basically never happens, but you still have to carry that extra overhead of interfaces.
Java also supports private/public method visibility, and this can be used to clearly show the contract. No need for interfaces.
I think you've navigated away from the scope of my remark; I've specifically asked what's the point of using interfaces when there's just one implementation, not "what is the point of interfaces" in general.
I think parent means that the one implementation may be changed in the future, so everything applies.
If you're 100% sure the one implementation will never change, then I'd say you're right. But that requires future-telling.
One motivation is to separate interface and implementation. Using an interface lets one see all the available methods in one place. With a class, one must use an IDE to filter out hundreds of irrelevant lines just to find the class contract. If you ever used Delphi or C++, they provide a much better experience by clearly separating the class interface from the class implementation.
> I always wonder what is the motivation when people do interfaces with just one implementation.
They intend to write a test double which implements the interface. But then they don't get around to writing tests?
You just invented Java AutoValue and Kotlin Data Classes.
https://github.com/google/auto/blob/main/value/userguide/ind...
I do not like Bloch, and see him as the architect of some of Java’s woes. Generics didn’t have to be so stupid. Gilad Bracha (who did a lot of the work on generics) quit the moment they were done to go try something very different - gradual typing. I hope he’s keeping an eye on what Elixir is trying, because Set Theoretic Typing has the potential to be big, and it can be applied gradually.
I can no longer recall exactly what Bloch said, I may have to search through some of my old writing to find it, but at one point he admitted he didn’t really understand type theory when he designed the collections API. And while I appreciate the honesty, and the reason (he was trying to illustrate that this stuff is still too hard if “even he” didn’t get it), I think it paints him rather worse.
But I already knew that about him from working with that code for years and understanding LSP, which he clearly did not.
I don’t know why they thought he should be the one writing about how to use Java effectively when he was materially responsible for it being harder to use, but I’m not going to give him any money to reward him. And there are other places to get the same education. “Refactoring” should be a cornerstone of every education, for much the same reason learning to fall without hurting yourself is the first thing some martial arts teach you. Learn to clean up before you learn to make messes.
He said at one point that he had thought of a different way to decompose the interfaces for collections that had less need for variance, with read and write separated, but he thought there were too many interfaces and they would confuse people. But when I tried the same experiment (I spent years thinking about writing my own language)… the thing is when you’re only consuming a collection, a lot of the types have the same semantics, so they don’t need separate read interfaces, and the variance declarations are much simpler. It’s only when you manipulate them that you run into trouble with Liskov, and things that are structurally similar have different contracts. The difference in type count to achieve parity with Collections was maybe 20% more, not double. So to this day I don’t know what he’s talking about.
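Something like the split being described, as a sketch (names are invented, not the JDK's):

```java
// Read side: all most consumers need; safe to share, trivially covariant.
interface ReadSeq<T> {
    int size();
    T get(int index);
}

// Write side kept separate, used only where mutation is genuinely part of
// the contract; the full sequence implements both.
interface MutableSeq<T> extends ReadSeq<T> {
    void set(int index, T value);
}

final class ArraySeq<T> implements MutableSeq<T> {
    private final Object[] items;

    ArraySeq(int size) { items = new Object[size]; }

    public int size() { return items.length; }

    @SuppressWarnings("unchecked")
    public T get(int index) { return (T) items[index]; }

    public void set(int index, T value) { items[index] = value; }
}
```

The Liskov trouble the parent mentions only appears on the `MutableSeq` side; an API that accepts `ReadSeq<T>` never has to reason about it.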
Most APIs should only consume collections from callers, so friction against mutation in your interface is actually a good thing.
> he had thought of a different way to decompose the interfaces for collections that had less need for variance, with read and write separated, but he thought there were too many interfaces and they would confuse people.
So Josh Bloch opted against separate read/write interfaces.
> the thing is when you’re only consuming a collection, a lot of the types have the same semantics, so they don’t need separate read interfaces, and the variance declarations are much simpler.
And you opted against separate read/write interfaces.
> the thing is when you’re only consuming a collection,
> a lot of the types have the same semantics, so they
> don’t need separate read interfaces
> Most APIs should only consume collections from callers
I'm having trouble understanding what you mean by "consuming a collection." Can you expand?
Read-only use of an externally provided collection, presumably.
The trouble is final is not final in Java, private is not private. I mean you can lock it down but you usually won’t. With reflective access, breaking the rules is one method call away. Common Java libraries such as meta factories (Spring, Guice), serializers (Jackson, GSON) and test frameworks regularly cheated in the JDK 8 era and a lot of us are running with protections for the stdlib turned off.
Normally JDK 21 would let you get private/final reflective access to your own “module” but not to the stdlib modules, yet so many libraries want private access to stdlib objects such as all the various date and time classes.
I haven’t really run into any libraries wanting reflective access at all. All of our applications are on Java 21, except for one that relies on 8 because it uses jdk internals.
Which ones have you run into? Jackson doesn’t even require you to `--add-opens` anything.
> So now we cannot easily mock final classes in tests
> And mocking tools have to resort to bytecode manipulation to mock the final classes
Well which is it? Presumably you use said mocking tool anyway, so it's not your effort that's being expended.
"Final all the things" really doesn't go far enough. There is little point substituting a mutable hashmap for a "final" mutable hashmap, when the actual solution is for the standard library to ship proper immutable collection classes.
In any case, I prefer to avoid mockito anyway, so it's a non-issue for me. Just do plain ol' dependency injection by passing in dependencies into constructors.
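A minimal example of that style, with a made-up `Clock` dependency — the test hands in a lambda instead of a mock:

```java
// Plain constructor injection, no framework, no Mockito.
interface Clock {
    long nowMillis();
}

final class Stamper {
    private final Clock clock;

    Stamper(Clock clock) { this.clock = clock; } // dependency passed in

    String stamp(String msg) {
        return clock.nowMillis() + ": " + msg;
    }
}
```

In production you pass `System::currentTimeMillis`; in a test, `new Stamper(() -> 0L).stamp("hi")` deterministically yields `"0: hi"`.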
As I mentioned in another reply, Josh Bloch experimented with a different type structure for the Collections API that could have yielded read only collections but he thought it was too confusing and went back to… this.
And I’ve never forgiven him for it
I would say that anything that requires monkey patching in a strongly typed language is a sign of bad architecture and application design in the first place.
If you are rigorous as to make classes final, you should also be rigorous to never provide a non-interface as an Application Programming Interface.
Google uses mock and fake implementations of interfaces, and provides dependency injection frameworks for managing these (Guice and Dagger).
I once worked with a guy who obsessively made interfaces for every java class. Even domain objects. He was extremely proud of this.
It was garbage.
Was this in Amazon? If so, it might have been me. Sorry about that. I have learnt my lesson now.
I don’t recall doing it for domain objects though.
>So now we cannot easily mock final classes in tests
Mocking final classes is a blunder on its own. Most classes should be package private not public, so being final would have close to zero relevance. Personally I do not use mocking tools/frameworks at all.
OTOH, there is very little benefit of having final classes performance wise. Java performs CHA (class hierarchy analysis), anyways.
In my experience I don’t see many people use final classes. Mostly just final fields.
The only final classes I can remember are stuff like java.lang.String, which needed to be immutable so a SecurityManager could consume them for policy decisions.
Good thing the security manager is deprecated for removal!
the System.out thing is Java itself
https://docs.oracle.com/javase/specs/jls/se7/html/jls-17.htm...
given this it's not surprising others thought it was acceptable also
Field modifiers are a semantic constraint not a security constraint. It is right and proper that you should be able to bypass them with the appropriate backflips.
The main issue is safety, because you might modify something that isn’t modifiable and cause a SEGV, and that is precisely the concern access modifiers are meant to address.
they certainly were a security constraint back in the day before Java gave up on trying to use the type system for security
e.g. SecurityManager for applets will not let you setAccessible(true) on private fields of system classes
I’ll be the first to admit that I’ve written the evil three liner to “un-final”, mutate, re-final a member off in some long forgotten internal library to dodge a gnarly refactor.
I do wish that I couldn’t have done so, shrug, business needs
Yes, but you would be surprised how many people want to change static final fields for various reasons - be it testing, or other things.
When you tell them that it doesn't work, and that it cannot work without violating the semantics of the JVM, they wave their hands and say "look, it does work here". And it looks like, yes, if the stars align in that specific constellation, it may work.
Also a part of why Singletons are the black sheep of the Patterns family. They’re nasty during bootstrapping and hell during functional testing.
IIRC illegal access can be locked down and be controlled in a fine grained manner with the add-opens and illegal-access flags on newer JVMs.
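For example (module and package names here are illustrative; note that the `--illegal-access` flag itself was removed in JDK 17, leaving `--add-opens` as the per-package escape hatch):

```shell
# Open java.base's java.time package for deep reflection to classpath code
# (e.g. a serializer), one package per target module:
java --add-opens java.base/java.time=ALL-UNNAMED -jar app.jar
```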
Tangential: Apple has a new Swift Java bridge which is pretty cool, supporting both JNI and Panama. I’ve been porting it to Android this past week.
I find the "modern" (if I can call it that) approach to cross-platform interesting: interoperability between languages makes it possible to share a library between multiple platforms when it makes sense. Until now I was exclusively doing that with C++ (possibly with a C API), but obviously C++ is never the preferred language when it is itself not necessary.
My concern, however, is about the cost of doing this. Say I have an easy way to call my Kotlin library from Swift in a mobile app, doesn't it mean that now my iOS app will load some kind of JVM (I don't know what would run on iOS)? Similarly, if I could call Swift from an Android app, wouldn't it load some kind of Swift runtime? It all brings overhead, right?
I guess I fear that someday, developers will routinely depend on e.g. a Swift library, that will depend on a Kotlin library (loading some JVM), that will itself use JNI to call some C++. Just like with modern package managers, programs quickly end up having 100+ transitive dependencies (even with just a few direct dependencies) just because it was "too easy" for the developer not to care.
Happy to see this gem shared here. I've learnt a lot about the JVM going through these.
This article about the "stack allocation" misnomer in Java in particular is one of my favorites: https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacemen.... What the JVM really does is escape analysis + scalar replacement.
I love the "size" of these posts. Kinda neat to just read through one in a few mins and maybe run the bench locally.
If you work for a few years with JVM based languages this set of articles are so interesting! I remember reading through these for the first time several years ago.
Does anyone know why the name of this series was changed from ‘JVM Anatomy Park’?
I think he renamed it when stuff first started coming out about Justin Roiland’s online behaviour.
I’ve basically forgotten about Java. It would never occur to me to start a new project in it. Am I the only one? It feels like I’d reach for python if I want fast development and flexibility, Go if I want to handle a bunch of I/O concurrency in a garbage collected way, Swift if I want a nice language that’s compiled and balanced, or Rust if I want performance and safety in a compiled language. Those are just my personal leanings. I know kotlin has made Java more ergonomic, and yet….
You're not the only one I'm sure, but sounds like you don't need it. Its major strengths are:
• Bottomless resource of developers with Java experience
• Vast array of existing libraries many of which are enterprise focused
• Managing very large codebases with many contributors is straightforward
• Standard VM that's very solid (decades of development), reasonably fast, and supported on essentially all platforms.
It doesn't have quite the stranglehold (even in Enterprise) that it had in perhaps the early 2000s and it's the archetypical "blub" language, but it's a perfectly reasonable language to choose if you're expecting Enterprise scale and pure performance is less valuable to you than scaling out with large numbers of developers.
I like Rust, but it's Java that puts bread on my table.
I fully agree with you about the solidity of the VM. The others I am not so sure about.
> Bottomless resources of developers with Java experience.
With Java experience, but what fraction have a systems outlook? What fraction have experience with other languages to ensure that the code they write is simple, understandable, and direct? My own experience is that too many come out addled by Enterprise Java idioms, and when you actually write some code in Erlang or Go you realize systems aren't as complicated as they have been made out to be.
> Managing very large codebases ...
I wonder if this is self-fulfilling. My theory is that these codebases are huge because their designs are enterprisey. The primary drivers of complexity are indirection: factories, dependency injection, microservices, these are all part of the same malaise.
> With Java experience, but what fraction have a systems outlook?
It depends what you mean by systems outlook but JVM based code is pretty common (to the point I’d say ubiquitous) in large distributed systems.
In open source it’s much the same. Many of the large Apache projects are in JVM languages, for example.
> The primary drivers of complexity are indirection: factories, dependency injection, microservices, these are all part of the same malaise
The indirection in Java does drive me crazy. But dependency injection is a problem to solve in every language and libraries that can do code generation at compile time like Dagger2 make this predictable, debuggable, and fairly easy to reason about on the JVM.
Microservices are, in my opinion, more of a business organization solution than one tied to any specific language. If you haven’t read Steve Yegge’s blog post about Amazon vs Google I think it’s good reading on why/when SoA is a good idea.
There are more Java devs out there than people living in my country. I don't think generalizing this way is helpful, there are many different styles of Java development, and developers have vastly different skill sets.
In addition to other answers: Java absolutely does have the qualities necessary for startups, so using it for new projects makes sense.
1. Modern frameworks and AI assistance can help ramp up a decent backend in days. A solo tech co-founder needs to know only Java or Kotlin plus some frontend stack to build an MVP quickly, and will spend so much time on non-coding tasks that language features become irrelevant. Swift can be the second language if you go mobile-native.
2. Scaling isn’t the problem you are going to have for quite a while. It is quite likely that the problem of scaling the team will come first, and any performance bottlenecks will be noticeable much later. Java is good for large teams.
That said, from a business perspective, if you want a larger talent pool, fast delivery cycles and something that may remain your core stack in the long term, Java or Kotlin is probably the best choice. If you want fancy tech as a perk to attract a certain cohort of developers, or you have that rare business case, you can choose Go or Rust. Python is popular in academia and bootcamps, but TBH I struggle to see the business value of it for generic backends.
There are plenty of very ergonomic languages on the JVM (for instance clojure).
I wouldn’t dismiss the JVM as a whole, it is a marvel of engineering and is evolving quickly nowadays (see loom, panama, leyden, etc…).
Flamewar-y reply to a flamewar-y comment:
Java is better than Go on every count, and almost all of your cases are 90% done by Java, so it's quite clearly a very good choice for almost everything.
I think it's not just you, but certainly not everyone. Kotlin with Java 21+ is my go-to choice for an I/O-bound service, or really any service. It's just so ergonomic, and with virtual threads the code can be as simple and efficient as Go - while also taking advantage of possibly the best and largest library ecosystem in the world.
I'm not knocking Go or Python - if those are your preferred tools, they're more than adequate. Java, however, isn't nearly as irrelevant as you may perceive.
Go is definitely not an ergonomic language.
I mean, I share your opinion personally but plenty of people find it pleasant to read and write. If it was really a bad language, it wouldn't have the adoption it does.
It's an interesting parallel development that people are complaining about the bad original language on both JS and JVM platforms and often using other languages (Kotlin, TypeScript, Clojure/ClojureScript, etc). I guess even Swift instead of Objective-C on the Apple side counts here in a way.
What I find sad in some of the guest-language communities, like the Kotlin and Scala ones, is how so many happen to disdain the platform that makes their ecosystem possible in the first place.
No one is going to rewrite the JVM, even though there are several implementations, all of them are a mix of C, C++, Assembly and Java, zero Kotlin and Scala.
Yet as usual there is this little island of a Kotlin or Scala ecosystem, with its own replacement for everything, and continuous talk about why the platform that makes their existence possible hasn't been rewritten in them.
Typescript and Clojure folks are traditionally more welcoming of the platform, they rather appreciate the symbiotic relationship with the host, and much more fun to hang around with.
Can you easily make a Desktop app in Swift (I guess Python, Go and Rust don't fit your criteria) and distribute it to all platforms?
I think anything JVM (be it Java, Kotlin or Scala) would be very good there. A lot better than ElectronJS.
For posters like you I always wonder: why are you posting? Why did you even click on the article if you dislike Java so much?
Is this posturing? Do you feel cool? Why did you come here and bloviate over something as silly as a language choice?
Are you the reason I got downvoted? I was very surprised by that.
I’ve spent years writing Java and later Scala, in academia and later production. I’ve always followed how the JVM and the language/ecosystem have progressed. And now I don’t use it at all. Is it really that odd to take a temperature on a site filled with other tech folks? I don’t understand why you took it so negatively and use words like bloviate, or attack me as just posturing to look cool (how does one look cool on a geeky Internet forum?). One of the HN tenets is to “converse curiously,” which is exactly my mindset when I wrote my comment. And if you look at the other replies, it seems others took it that way as well, with healthy discussion.
Your comment is a generic off-topic one that has only Java in common with the original post. The way it is written, it gives certain vibes and doesn’t sound like a genuine invitation to compare Java with other technologies. If that was your intention, maybe you should write your own post and ask there instead.
I don't understand the downvotes, I don't find your post offensive. My guess is that people don't downvote to moderate, but rather to show that they disagree. Which unfortunately kills the discussion a bit.
I like Java (but I love Kotlin), and it seems like work on the JVM is more active than ever. I can understand your preferences, but what I observe e.g. with Desktop apps is that people use Javascript and embed a whole browser with it (e.g. ElectronJS). I would always prefer a JVM desktop app. Also with modern UI frameworks (including e.g. Compose), I am really hoping that the JVM will get a boost for Desktop apps.
I absolutely downvoted to moderate - the post had nothing to do with the article on hand. It's much more appropriate for an Ask HN post.
It's really tiring to see these "Oh this language sucks" posts under articles that discuss details and techniques in languages - it added nothing useful to the conversation, especially since it was framed as a personal preference. Who cares that you don't care about Java?
But it did not say "this language sucks". To me it was kind of asking something like "is Java still relevant nowadays?", in a polite way. It actually got a few interesting answers ("Java sucks" probably wouldn't).
I believe that it's enough to not upvote a message if you find it irrelevant. The upvoted messages will stay at the top. It is not completely off-topic: the people who will read the featured article have knowledge about Java, after all.
My concern is just that downvoting is fairly aggressive. You don't need to be massively downvoted many times to effectively end up being silenced (if you are moderated 2-3 times while you wrote a polite, genuine question, chances are that you won't come back). By aggressively downvoting everything that we don't find particularly relevant, I feel like it just encourages bubbles. "We are a group of Java enthusiast, just don't come talk to us if you are not a Java enthusiast yourself. Find a group of people who has the same preferences as you do instead".
I would like to note the irony: my comment above is being downvoted as well :D. It is starting to feel like the kind of toxicity I find on StackOverflow.
I also don't understand the downvotes; the original post was polite and informative, the "moderator" throwing downvotes was aggressive, and more harm was done than help.
I have the upvote counters hidden, because I don't want some misguided individual to influence what should be important for me and what shouldn't. I make my own filtering choices.
I wish that one day the internet will realize that upvote counters are more harmful than they are useful, just as it realized for downvotes. Upvotes in general promote bubbles.
> Am I the only one?
Probably not. Java had stagnated for quite a while, entirely missing the lightweight threading and/or async/await revolution of the last decade. The JVM ergonomics also just sucks, a lot of apps _still_ have to use -Xmx switches to allocate the RAM, as if we're still using a freaking Macintosh System 6!
On the other hand, it's a very mature ecosystem with plenty of established battle-tested libraries.
Java is one of the few platforms that do have virtual threads (the others being Erlang, Go and Haskell), and it is by far the biggest among these.
Is it not you who stagnated a bit?..
I think your information is outdated. Java has had lightweight threads for several releases now. It also has type pattern matching switches, and a bunch of modern ergonomics.
async/await is not really a revolution, so much as a bandaid bringing a modicum of parallelism to certain programming languages that don't have a good threading model.
Xmx is mostly a thing if you have very little RAM, or some sort of grievously misconfigured container setup. By default the heap grows up to 25% of the system RAM, which is a relatively sane default.
> I think your information is outdated. Java has had lightweight threads for several releases now.
Well, yes. It was released as part of JDK 21 a year ago. So far, the adoption has been spotty. They are also not implemented in the best possible way.
> Xmx is mostly a thing if you have very small RAM, or some sort of grievously misconfigured container setup. By default it grow up to 25% of the system RAM, which is a relatively sane default.
Other more sane runtimes (like Go) do not even have developers care about the heap sizing. It just works.
It’s valid criticism because you do need to think about it less in other runtimes, but it doesn’t always just work. There’s a reason why GOMEMLIMIT and other knobs for high allocation programs were introduced.
IIRC .NET just sets it to 75% of available memory.
> IIRC .NET just sets it to 75% of available memory.
Out of all three, Go's is the least configurable. The .NET GC is host-memory aware and adjusts heap size automatically. It does not need an Xmx as it aims to keep host memory pressure low, but the hard limit really is only the available memory unless you override it with configuration.
It has been further improved as of recently to dynamically scale heap size based on allocation rate, GC % time and throughput targets to further reduce sustained heap size.
Java does the sane thing within containers, and you definitely don't have to set memory settings anywhere else, unless you want some very specific behavior.
I thought this was the case but actually couldn't find any documentation on it. The best I could find was that the VM is aware it is in a container and will correctly set the heap percentages based on the container's memory. It still looked like it was defaulting to 25%.
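One way to check what the JVM actually picked, rather than trusting docs, is to query the runtime directly (or run `java -XX:+PrintFlagsFinal -version` and look for `MaxHeapSize` / `MaxRAMPercentage`):

```java
public class HeapInfo {
    public static void main(String[] args) {
        // With no -Xmx, HotSpot derives the limit from MaxRAMPercentage
        // (default 25%) of visible RAM; it is container-aware, so inside a
        // cgroup-limited container "visible RAM" is the container limit.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("max heap: %d MiB%n", maxBytes / (1024 * 1024));
    }
}
```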
The async/await model is not a revolution, but rather a tool for specific use cases. It shouldn't be used by default, as it makes the project unnecessarily complicated. If your requirements are to do heavy parallelism where everything uses I/O, then use async, but for the rest of the cases? Probably not worth it.
The revolution is the need for massive concurrency. It doesn't need to be async/await, Go-like green threads are even better.
> ...entirely missing the lightweight threading...
They deliberately took the longer route, aiming to integrate lightweight threads in a way that doesn't force developers to change their existing programming model. No need for callbacks, futures, coroutines, async/await, whatever. This required a massive effort under the hood and rework to many core APIs. Even code compiled with decade old Java versions can run on virtual threads and benefit, without any refactoring or recompilation.
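A sketch of what that looks like on JDK 21+: ordinary blocking code, no async annotations, one cheap virtual thread per task:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // JDK 21+: spawning thousands of threads is fine because each task
        // gets a virtual thread multiplexed over a few carrier threads.
        try (ExecutorService ex = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                ex.submit(() -> {
                    Thread.sleep(5); // parks the virtual thread, frees the carrier
                    return null;
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("done");
    }
}
```

The body of each task is exactly what a blocking Java 8 program would write; that backwards compatibility is the "longer route" being described.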
> ...and/or async/await revolution of the last decade
async/await is largely syntactic sugar. Java has had the core building blocks for asynchronous programming for years, with CompletableFuture (2014, replacing the less flexible Future introduced in 2004) and NIO.2 (2011, building on the original NIO from 2002) for non-blocking I/O, along with numerous mature libraries that have been developed on top of them over time.
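E.g., the async/await shape as plain library calls:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        // supplyAsync starts the work off-thread, thenApply chains the
        // continuation (the "await then compute" step), join awaits the result.
        CompletableFuture<Integer> result = CompletableFuture
                .supplyAsync(() -> 40)
                .thenApply(n -> n + 2);
        System.out.println(result.join()); // prints 42
    }
}
```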
Omg I forgot about that. So there’s no way to say just grab the memory you need and that’s that?
That’s in the works, where it adapts from 16mb to terabyte heaps. The current GCs have a max, with lazy allocation and ability to release back to the system periodically, but are not as system aware.
1. https://openjdk.org/jeps/8329758
2. https://m.youtube.com/watch?v=wcENUyuzMNM&embeds_referring_e...