I used to work heavily in this kind of system modeling (developing tools for it, and dogfooding them), and I still use it when it's high-value for figuring out or communicating an aspect of a system.
Here's a challenge, to help appreciate the nature of these: try to find an error in the diagrams.
It's usually harder than you might think: even when you know the notation and metamodel semantics, these diagrams are information-dense statements about a domain.
You usually have to know or learn a lot about the domain, and/or have a domain expert you can walk through it exhaustively, before you can find errors.
And an error can be a whopper: a single graph edge that is missing, between the wrong vertices, or carrying the wrong adornments can have huge implications.
For example: large amounts of work that have to be redone, a project abandoned, a mess that takes 10x longer than it should to write and carries 10x the tech debt going forward because of a bad architecture, or a fundamental security flaw.
One of the mistakes many people make is treating formalized diagrams as "marketecture" visuals, as if they're only needed for handwavy sales presentation slides, where there's some kind of visual for every concept they want to be able to literally point at.
Nope, if you have software engineers and domain experts communicating and reasoning about your system in only the fuzzy terms of sales/exec presentations, you're really stabbing yourself in the face.
One of my more painful design mistakes happened in this sort of way, when designing a system for recording inspections. I interviewed multiple inspectors and came up with a representation that was a bit more elaborate than I would have hoped, but that, I believed, at least captured all the information.
Then the company progressed, we eventually reached product-market fit, and for two years the team and I dealt with an increasingly burdensome complexity that we were not reaping any rewards from. Then one day we'd had enough, and a colleague redesigned the system to ditch the extra complexity, resulting in a much more elegant design that was easier to reason about.
That bliss continued for less than a year, until some customers asked for a particular report that we needed to generate based on a structure of the information that was no longer present. We had to redesign it again; the migration was super painful and required a temporary workaround that put an extra branch in literally every piece of code that touched the inspection system.
In retrospect, I still don't know how I could have convinced the team that the complexity was something we needed, when no customer required it for three years. Especially since the colleagues who took over that system from me had, by then, gained much more experience and expertise in the domain than I had when I designed the original.
It would probably have been better if I had recorded the requirement that prompted the complexity but not actually implemented it, since no customer had an actual need for it at the time. Then we would not have had to deal with the complexity for the first three years, and could have evolved the product when the need arose.
This seems like a business problem more than a design issue. Systems need to evolve alongside the business they support. Starting out with a simple design and evolving it over time to something more nuanced is a feature. Your colleague was right, and you were also right; except for the part where all nuances of the ideal solution need to be present on day 1.
The clients you have on day one are often very different from the ones you’ll have a few years in. Even if they’re the same organisations, their business, expectations, and tolerance for complexity likely have changed. And the volume of historical data can also be a factor.
A pattern I’ve seen repeatedly in practice:
1. A new system that addresses an urgent need feels refreshing, especially if it’s simple.
2. Over time (1, 3, 10 years? depending on industry), edge cases and gaps start appearing. Workarounds begin to pile up for scenarios the original system wasn’t built to handle.
3. Existing customers start expecting these workarounds to be replaced with proper solutions. Meanwhile, new customers (no longer the early adopter type) have less patience for rough edges.
The result is increasing complexity. If that complexity is handled well, the business scales and can support growing product demands.
If not… I'm sure many around here know from experience where that leads (to borrow Tolstoy: “All happy families are alike; each unhappy family is unhappy in its own way.”).
At the same time a market niche may open for a competitor that uses a simpler approach; goto step 1.
The flip side, and this is key: capturing all nuances on day 1 will cause complexity issues that most businesses at this stage are not equipped to handle yet. And this is why I believe it is mostly a business problem.
They're an awful substitute for code. About the best use for them is to give people an overview of the architecture of an existing system.
Even then they're not great, because they tend to go out of date quite quickly and they're quite expensive to build.
As a means of software design they're BDUF crack - an incitement to bad decisions made in advance of writing software, decisions that would always be much better made retrospectively via refactoring.
> try to find an error in the diagrams.
Easy peasy, it's the 3rd blue line above the purple line /s
https://github.com/takaakit/uml-diagram-for-ddd-example-in-e...
I can't believe someone took time to generate such a thing, as if it is useful to anyone
I found this video extremely funny, as I've sort of been in the same position: "draw seven strictly perpendicular lines", asked of the dev by the sales team.
It’s a great demonstration of how unrealistic it is to use it.
If it is this complicated for a demo project for the purposes of the book, in my opinion, it shows it’s completely inadequate for handling anything remotely real.
A project ongoing for five years, 10 devs on the team, each year two resign and two join, some good, some mediocre: can you imagine the mess this will be? And that’s not even a large project; it’s just an average, real project size.
KISS / YAGNI goes a long way.
I also think it's bad to create this many diagrams at this level of detail, or try to keep the model updated, in an active software project. I think it's important to be selective—sometimes that even means choosing not to choose.
> I can't believe someone took time to generate such a thing, as if it is useful to anyone
Thanks for saying this. UML is beyond useless, almost nobody understands it, almost nobody does it right and almost nobody uses it when they need to (because it makes no sense to them).
This directed graph is automatically generated by the plugin based on the UML structural elements (classifiers and relationships). Hope this helps.
Good UML is really simple UML.
"Then what about complex things? Can't make everything simple"
Do partial diagrams. Simplify or skip things your team already knows.
Also, great UML is no UML. Sometimes the code itself is short and clear enough, requiring no diagram (of course, not all diagrams are about code... but use case diagrams are rare these days anyway).
Also, use cases and use case diagrams are great.
Also, perfect UML is disposable. Thinking of long term diagrams that serve as documentation is a mistake.
This is why UML only belongs on a whiteboard. It makes it much harder to include unnecessary detail when you have to physically add it, and there are hard real-estate constraints.
I very much like your comment. It describes practical use of diagrams. In the past, I've included diagrams for a particularly complex state in the comment on top of my code. It certainly doesn't describe the whole system, or even the complete state of that code. And I expect that someone will delete the comment at some point. That's all fine.
Thanks!
I remember once drawing ASCII diagrams for a particularly tricky algorithm at the top of its unit test methods. And people indeed deleted them.
Analyst: “So this part of the UML diagram is right? A fizz always belongs to a buzz?”
Domain expert: “Yes, always”
Analyst: “Any exceptions you can think of?”
Domain expert: “No, none at all.”
---
Forward to day 1 after “delivery” of the implemented system.
Domain expert is now using the system for the first time in a real-life situation:
“It doesn’t let me save the fizz I’m creating. How does this handle a case where a fizz doesn’t yet belong to a buzz?”
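In model terms the whole miss is a single multiplicity: "always belongs to" became a required reference instead of an optional one. A minimal TypeScript sketch, with hypothetical Fizz/Buzz shapes just to make the cardinality concrete:

```typescript
// What "a fizz always belongs to a buzz" turns into: a 1..1 association.
interface Fizz {
  id: string;
  buzzId: string; // required, so an unattached fizz cannot even be saved
}

// What day-one usage actually needed: a 0..1 association while the fizz is new.
interface DraftFizz {
  id: string;
  buzzId: string | null; // optional until the expert assigns a buzz
}
```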
Is there an equivalent of The Mom Test for talking to domain experts?
For architectural documentation like this, the C4 Model [0] is a much better fit than UML - primarily because it's less rigid in notation and modeling components. And in terms of tooling, I find IcePanel [1] to have the right combination of flexibility and simplicity.
The bottom layer of C4 is still basically UML, although everyone usually skips that.
I love IcePanel and would recommend everyone try it. But like all these things, it requires an almost superhuman level of commitment to get value out of it. It has built-in mechanisms to keep you honest and up to date, and I have found it useful at both the strategic and tactical level when used right. But ultimately it’s difficult to build an engineering culture around long-term, living diagrams. The moment everything gets out of sync it might as well just be a photo of a whiteboard. I strongly suspect this sort of platform plus AI will be a great combination (I think at least one HNer is working on exactly that, and I assume IcePanel is too).
What happened to UML? I remember it was everywhere in enterprise computing 20 years ago but seems to have disappeared now.
Is it still around, or did it go the way of SOAP and Java applets? If not, what has replaced it?
Underneath the good ideas of UML was the idea of model-driven development - that you could generate code from the models.
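For anyone who never saw it in action, the pitch was roughly: draw an association like Order "1" --- "0..*" OrderLine in a class diagram, and the tool emits skeleton code along these lines (a hypothetical sketch; the real tools generated C++ or Java with far more boilerplate):

```typescript
// Roughly what an MDD tool might generate from the class-diagram
// association Order "1" --- "0..*" OrderLine.
class OrderLine {
  constructor(
    public readonly sku: string,
    public readonly quantity: number,
  ) {}
}

class Order {
  // The 0..* end of the association becomes a collection attribute.
  private readonly lines: OrderLine[] = [];

  addLine(line: OrderLine): void {
    this.lines.push(line);
  }

  getLines(): readonly OrderLine[] {
    return this.lines;
  }
}
```

The promise was that you could edit either the model or the code and keep the two in sync; as noted elsewhere in this thread, that round-tripping is the part that rarely held up.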
Good diagrams are an art form though, just like other forms of good documentation. There is a big difference between a diagram being a normative source of truth, and being a good tool.
Take this diagram as an example: https://takaakit.github.io/uml-diagram-for-ddd-example-in-ev...
* Nearly the top half is taken up by relations which aren't especially helpful, and would be better expressed on the entities themselves.
* The "Location" type looks like the primary type in the system based on nearly half the types in the system connecting with it, creating a nest of graph edges. However, this is likely a simple and immutable record type. If the entities in this diagram had attributes (e.g. like a class diagram), you could refer to it by name, and greatly simplify the cognitive load in understanding this picture.
The original diagram is the sort that gets generated from code, and it is at the level of specificity needed to generate code from a diagram. It also isn't very useful for conveying the system to a person - a developer might be more comfortable reading the code than looking at a picture generated from it.
IMHO, that split focus in UML between people and tooling greatly reduced the overall usefulness and understanding of the system. It gave UML a bad reputation. Training on MDD tools prevailed over providing a baseline of common nomenclature to enable "team cave drawing on a whiteboard", which has no commercial tooling or market other than the whiteboard and markers.
The areas where visual languages have a bit more prevalence are on the business modeling side, including things like network architecture diagrams.
It is a bit of a shame - going through a set of use cases while iteratively refining the architecture through a set of class, sequence and ECB diagrams is something I always found to be crazy efficient vs diving immediately into prototype code.
Just this week I did a bunch of UML diagrams.
It is pretty much alive in big corps.
I imagine it fell out of favour in hipster coffee shops coding their next big idea, rewriting it multiple times instead of validating the idea in the first place.
GitHub supports UML diagrams in its markdown for a reason.
It's still being taught in schools AFAIK.
Had an intern a year ago whose school forced them to go the whole nine yards with UML. Our company gave them a project to build, and their school made them draw use-case diagrams, class diagrams, sequence diagrams, and activity diagrams, before the student was allowed to write any code.
This was a waste of everyone's time, and we gave the school some choice feedback which I'm sure they'll ignore.
Sequence diagrams are still used quite a lot, and for good reason, they're incredibly useful. I see simple class diagrams quite frequently as well, but without the poorly designed arrow nonsense (shaded? open? closed? argh!).
I'm kind of hazy about where the boundaries between bona fide UML and PlantUML are, but PlantUML is in pretty common use in my world - more for sequence diagrams and state diagrams than class diagrams though. Of course PlantUML has competition from things like Mermaid, but they're all much of a muchness.
My very non-scientific impression is that tools like Rational were mostly used for drawing class diagrams and ... that just turned out not to be that useful, because the tooling didn't exist to round-trip changes in the code back into the diagrams, which meant the diagrams lost parity with the code rather quickly. It was sort of useful in the linear design/code/deliver world, but we're a lot more iterative now.
I remember having to use Rational Rose back in the day, and I had to ask my boss why because the software crashed and the files got corrupted all of the time. If it didn’t help them write better software, why were we using it?
That said, I still find PlantUML helpful, particularly for sequence and activity diagrams. With LLMs especially. Use dependency injection, fine-grained components, sequence and activity flows. At least for me, it helps keep my mind organized. But these days I work alone; I feel it may be too dated for the modern developer.
> With LLMs especially...
> I feel it may be too dated for the modern developer.
Sounds like you're still ahead of the curve, worry not.
I'm still too much of a curmudgeon to even give LLMs a try.
I just learned about UML earlier today when I wanted to bring a Mermaid flowchart into Lucidchart. If you import Mermaid via Lucidchart, it becomes a raster image. But if you use UML markup, you can ungroup it and modify it. I couldn’t find a good way to convert from Mermaid to UML markup that Lucidchart could handle, so it ended up a dead end for me.
On a similar note I've had various annoyances with wanting to round trip diagrams from Mermaid or PlantUML into Miro and vice versa. There doesn't seem to be much out there to facilitate this, which is strange, because it feels like a very enterprisey thing to want to do.
There's probably a SaaS product in it for someone sufficiently interested in the topic... probably not at unicorn scale, but enough for a side income perhaps?
I think folks upgraded to GML [0]
I love it. Even with GML, you run into problems. Diagramming is an art more than a science. You have to know your audience (technical people, marketing types, investors, customers...)
Leaving stuff out is also important. (Example: Unless you're building a DNS related product, don't bother to include "DNS" in your diagrams. Your use of it is assumed.)
This is wonderful. Thank you.
Wait until you discover IGML (intergalactic modeling language)[1]. The entire user interface of a space game I'm working on -- including menus, buttons, sliders, even text rendering -- is literally implemented via a form of IGML. It's a very old language. Games like Asteroids and Battlezone have used it since forever.
Just my opinion, but I think that folks realised it was not much use for initial design, but could work reasonably well for documentation of existing systems.
Documenting existing system dependencies using the product of which I'm a founder:
https://schematix.com/video/depmap
Like UML deployment diagrams, but our models are interactive and can be queried.
We also have a large customer doing DDD modeling and then generating code directly from the models they build.
Yeah, it started to work for documentation when they invented that thing that prevents you from having to constantly maintain super complex diagrams in order to keep them in sync with the actual product.
IIRC it fell a bit out of fashion towards the end of the 2000s, along with the OOP fanaticism it was closely related to.
In payments and most other industries with a lot of connections to 3rd parties and complicated transactional flows, UML diagrams (especially sequence diagrams) are used a lot, albeit mostly as a conceptual tool rather than some formal description (see for example the Stripe docs). I also see UML state chart-like approaches being used more formally in frontend (e.g. XState).
UML as a methodology seems to have disappeared, but bits of it (e.g. sequence diagrams) seem to have been absorbed by the profession at large.
I think UML statecharts are much more useful diagrams than sequence diagrams. They can convey a lot more of the behaviour, especially if you use the full 'Harel' formalism.
Too bad they aren't that popular for documenting behaviour, because sequence diagrams really don't convey that much.
For example, timeouts are described very neatly in a Harel statechart. How would one describe a timeout and its handling in a sequence diagram?
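To make the timeout point concrete, here is a rough statechart sketch using XState (mentioned upthread); the payment flow, event names, and 30-second delay are all made up for illustration:

```typescript
import { createMachine } from 'xstate';

// A Harel-style statechart: the timeout is just another transition,
// declared as a delayed ("after") event next to the normal outcomes.
const paymentMachine = createMachine({
  id: 'payment',
  initial: 'idle',
  states: {
    idle: {
      on: { SUBMIT: 'awaitingAuthorization' },
    },
    awaitingAuthorization: {
      on: {
        AUTHORIZED: 'confirmed',
        DECLINED: 'rejected',
      },
      after: {
        30000: { target: 'timedOut' }, // timeout handling, declared in place
      },
    },
    timedOut: {
      on: { RETRY: 'awaitingAuthorization' },
    },
    confirmed: { type: 'final' },
    rejected: { type: 'final' },
  },
});
```

In a sequence diagram the same behaviour usually ends up as a free-text note hanging off a lifeline ("if no response within 30s, ..."), which is exactly the kind of detail that gets lost.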
Sequence diagrams existed prior to UML. They were extensively used in telecom: “message sequence charts.”
They were part of a telecom language called SDL. There were tools to “compile” to various languages such as C. If you followed their rules, you were supposed to be able to go back and forth between C and SDL.
I only remember this because the biggest tool provider at that time was a French company called Verilog. They preceded the HDL of the same name by a few years, I guess.
I used to work for Motorola's telecom division and we used IBM Rational Suite to generate C++ classes from the SDL. Merges (and we used very long-lived release branches) were pure hell...
IIRC sequence diagrams were invented by one of Carl Hewitt’s PhD students during Hewitt’s earlyish work on the Actor model?
Gul Agha?
I occasionally use a “UML Lite” [0] to illustrate stuff, but I seldom work at a scale where I would consider it necessary. I generally keep my designs in my head. Makes for much faster and higher-quality work.
[0] https://littlegreenviper.com/the-curious-case-of-the-protoco...
My theory is that Agile killed it. UML requires a lot of planning up front, and in my unfortunate experience, any sort of planning is completely antithetical to Agile.
Experiencing this right now: zero planning for a rewrite of a system.
I think its abandonment is related to the adoption of agile methodologies. UML was used to describe systems from day one. Agile methodologies came in, claiming that we don’t need it.
You still need to describe systems in agile.
All agile does is ask you to iterate, which means updating your design, your code, and your tests & documentation, as you learn.
Please read the agile manifesto again, and actually apply it. It does not say we don’t do design at all. Never did.
There is the manifesto, and then how almost everyone does it.
We still learned UML diagrams in school in 2019, though I have to say I haven't used them anywhere in my job.
In the case of UML, I think enterprise consulting and tool sales might've already obscured much of the real value of semiformal methods, before a CS curriculum could sour students on it.
With ZIRP growth scheme startups hopefully over, and more of us having to get jobs building systems that work reliably and sustainably, we might gain a new appreciation for system modeling.
UML and waterfall/Big Requirements Up Front/analysis paralysis all ride in the same car.
Agile and scrum and iteration saw them off.
And that is how we end up doing integration tests where folks in the field discover missing user stories that no one is going to implement, because there is no budget left.
UML never matched reality and this isn’t really an engineering discipline even if we pretend it is.
You can really break UML by specifying a system then changing it a number of times. That process will hurt you. Badly.
Just another data point that the "low code/no code" revolution sputtered out over a decade ago.
I still see and use a small subset of UML for more complex architecture discussions, but that's about it (i.e. service nesting, message passing, etc.).
UML was a scam, really. They took a lot of very useful diagrams, created an absolute behemoth of an unproven system for developing software, and sold it.
The diagrams are very nice and useful. But UML as a process, if taken literally, is a total disaster.
One of the creators of UML, Booch, has said so himself that it was taken to a point where it should never have. Interesting interview: https://youtu.be/u7WaC429YcU?feature=shared
> UML was a scam really.
Objectively and utterly wrong.
These diagrams make a lot of sense if you understand the symbols and the relationships plus aggregations. You can easily talk through them. I use these diagrams with non-technical people all the time.
At one point it looked like working in corporate IT meant you’d have to learn Java and UML and the magic light left the forest and the little fairies and elves that made computing magical and happy and bright died.
Fortunately UML was not the future.
It's useful for conveying architectures. The problem is that UML has been abused by some vendors that implemented bad software based on "Rapid UML Development", and it soured a lot of developers. Also, it is very oriented toward old-style Java OO. But sequence diagrams are still in use.
OO yes, but not Java particularly. I was using Rational tools with C++ projects well before Java was getting any visibility.
Apropos of nothing, all of the major AI models have gotten really good at turning a PDF or document into UML.
Not great, mind you. But 80% of the way there still saves an hour of your time, though.
How about we ditch the AI slop
Have you noticed that at the bottom of every HN comment page is an ad for AI Startup School?