For the academic reward system, maybe Dijkstra was right. But if you work in tech, you quickly discover that most of the complexity around you is not introduced because it sells well, but because most people are incapable of good design. Then, sure: since complexity sells better, there is little pressure on management to replace them with people who can design...
I don't think one should underestimate the incentives at play here, though. Complexity sells not just for literal money but for career prospects and so on. The incentives all around really do favor complexity.
I hate moat building. I understand why it exists, but I don’t have to be happy about it.
Moat building is why we need disruptive innovators to come along every now and then and shake things up. Moat busters.
Absolutely! Curriculum-driven development is a thing!
That’s a really funny term. Is there a blog about it or something?
Complexity sells in terms of dopamine. "Look at this incredibly complicated thing I made, that I understand and you don't! Aren't I brilliant!" "You must be - I can't understand it at all."
People get emotional rewards from themselves for making something work that is at the limit of the complexity they can handle. They often also get emotional rewards from others for making things that others can't understand. (They shouldn't, but they often do.)
People are very capable of good design but are not given time to do good design. Temporary band-aids become permanent fixtures and eventually the company is drowning in tech debt. It is a tale as old as time.
Time is one of the dimensions, but I often see (bad) designers stick with the first idea they have (and if they are not very good, the first idea is likely very bad). Then, as the idea shows its weaknesses, instead of understanding what is going on, they invent more broken "fixes", complicating the original idea even more. So there is a multiplicative effect going on and on. This is why the 10x programmer is not a myth: it would be if it meant "I can do 10 times what you do", which would be impossible when comparing a top programmer with an average one. What happens instead is that a 10x programmer avoids one design mistake after another, so they end up doing less and less useless work that would in turn complicate things further, and so forth. Being a 10x coder is about design, not coding.
It’s strange: I’ve started to observe some of this, but it seems like the “bad designers” have no concept of design. They’re happy to have their code reviewed but won’t go over the design before starting to write code.
I still think you can have multiple levels of skill across design and code implementation, though.
People even deride design on this forum sometimes.
Our profession doesn’t really know what it is, and that makes us easily manipulated.
I used to think I was a bad designer, because I often have to redesign things. Then I found folks who don't even do that...
I love opening up diagrams.net and working on designs. I think it's possibly one of my favourite things to do as a programmer, possibly more than actually coding.
>> but are not given time to do good design
Most professionals have to wrestle with time constraints. Push hard enough and at some point the carpenter/doctor/civil engineer/whatever firmly says “no”.
What’s the difference in software that unbounded tech debt is permissible?
Clients regularly tell carpenters to “just do X” against the professional’s better judgement. The carpenter isn’t going to call the collapsing jerry-rigged staircase tech debt; instead they tell the client “no, I won’t do it”.
Our profession generally lacks sufficient apprenticeship. We could learn a thing or two from student doctors doing their rounds.
> Clients regularly tell carpenters to “just do X” against the professional’s better judgement. The carpenter isn’t going to call the collapsing jerry-rigged staircase tech debt; instead they tell the client “no, I won’t do it”.
> Our profession generally lacks sufficient apprenticeship. We could learn a thing or two from student doctors doing their rounds.
I'm not sure how apprenticeship would solve this problem in software. To me, the difference seems to be that unlike carpenters, most people in software don't work on a contract basis for specific clients, but as an employee of a specific company. We don't have the authority to just refuse to do what we're told, and even in fairly good workplaces where you can voice technical disagreement without fear of repercussions, at the end of the day you'll often get overruled and have to go along with what you're told.
At least the doctors’ difficult, lengthy, and expensive credentials are fairly relevant to their apprenticeship experience. I don’t give CS degrees the same benefit of relevance.
Harsh, but largely true. But is it academia that isn't working on things relevant to practitioners, or is it practitioners ignoring academia while chasing hype and frameworkism?
> People are very capable of good design
When did people learn good design?
> People are very capable of good design but are not given time to do good design.
So they're only good in theory given infinite time, but not in the real world where someone's waiting to be able to use what they're working on?
Who said anything about infinite time? What the poster you're responding to meant was that, given our profession's leniency about tech debt and the "go go go" push from non-tech roles (PM, SM, etc.), there's always less time than needed.
Yeah, saying someone is only competent when given literally unbounded time is equivalent to saying they are not competent in the real world, where people have a finite amount of time.
There are multiple factors, all pointing in the direction of complexity.
Avoiding the hard challenges of design at any cost is certainly a factor. I've seen design demonized as waterfall, and replaced by seat-of-the-pants planning almost universally. "Design" is the opposite of "Agile" in some minds.
Time crunches and the "move fast and break things" mentality result in broken things (shocked!). Keeping a sub-optimal system running smoothly requires an investment in complex workarounds.
Customers will always bias towards new features, new buzzwords, and flashy aesthetics. They don't innately understand the benefits of simplicity - they assume more/new is better.
Software developers want to keep up with the rapid pace of technical change; they are intrinsically motivated to adopt newer technologies to avoid getting stuck on a dying career path. New technologies almost always layer on new abstractions and new dependencies - increased complexity is almost guaranteed.
Finally, we're advancing the state of the art of what computation can achieve. Pushing the boundaries of inherent complexity is effectively the stated goal.
All factors steer us towards ever-increasing technical complexity. It takes a force of nature (or really abnormally disciplined, principled engineers) to swim upstream against this current.
The Draeger's jam study, conducted by Sheena Iyengar and Mark Lepper in 2000, suggests that consumers are more likely to purchase when faced with fewer choices. When the selection of jams was reduced from 24 to 6, purchases increased significantly, illustrating the alleged "choice overload" effect. This ostensible paradox suggests that while complexity attracts attention, simplicity sells.
Is reducing 24 to 6 "good design"? The study controlled for the actual quality of the jams.
Especially for frontend development and "enterprise" software. Simplicity often seems to not be part of the vocabulary.
The mindset is that simple is easy and easy isn't worth much, if any, money.
Of course, complexity isn't intentionally introduced for sales.
What happens is that new features are added willy-nilly, and these take priority over the quality of the overall product - see the triumph of MS Office in the 90s and many other cases of software companies competing.
And companies have their priorities, and their hiring and management reflect those priorities even when they're just implicit in what's done. In particular, if you let older software engineers go and constantly push the younger workforce with fire drills and the like, no one will be "capable of good design" - but why would they be?
I know plenty of people who can’t design for shit, but I don’t think that’s the start or the end of it. It’s a lot of discounting the future and an uncomfortable amount of "Fuck You, I Got Mine": people either hurting their future selves and not seeing the cycle, or hurting other people because "they deserve it for not being smart" (they’re smart, they just don’t find your ideas as fascinating as you do).
Any tips on getting better at design?
Start paying attention to the things that bog you down when working on code, and the things that your users (ought to) expect to be easy but that are inscrutably difficult to achieve with your existing codebase. Find high-quality open source projects and study them. Read books (e.g. Domain-Driven Design [Distilled]). Stay somewhere long enough to feel the impact of your own bad design decisions. Take some time occasionally to pause and reflect on what isn't working and could have been done better in hindsight. Consider whether you could have done anything differently earlier in the process to avoid needing hindsight at all.
Sometimes it helps to look at commit history as well and ask how we got here.
Yep, agreed. I think that's another way to view the current state of "GenAI" tooling (e.g. all those complicated frameworks that received $M in funding) and why things like https://www.anthropic.com/research/building-effective-agents fall on deaf ears...
But it does sell well if you frame it right, in performance reviews.
How microservices are still a default systems design architecture in anything but the largest orgs puzzles me.
I feel the same way about most cloud-native services.
Sure, Lambda is fine for that small app, but I once inherited a 100k/month mess of SQS, Step Functions, API Gateway, Cognito, Lambda, ECS, AppSync, S3 and Kinesis that made me want to go into carpentry.
It wasn't simple, it wasn't quick to make, it wasn't cheap, it wasn't fast, and no: it did not scale (because we reached the limit of Step Functions).
Unless you've asked for a limit increase _multiple_ times, I can guarantee you haven't reached the limit of step functions.
The default limits are _very_ conservative in large regions
(Admittedly, by the time you've asked for those limit increases you should probably reconsider what you're doing, you're bleeding $$$ at this point)
When I was growing up there was a shop a couple of towns over that didn’t have better prices than the local one, but it ran discounts that made people feel good, so people got suckered into driving half an hour away to get bilked. Even my dad, who initially complained.
Feeling like you’re getting a special deal overrides objective thought. There’s a bunch of this stuff in AWS and it all feels dirty and wrong.
Conway's Law was written about 57 years ago.
Theoretically, microservices allow each team to deploy independently; thus the choice is made up front, before any part of the system is designed, because it looks like it reduces the effects of inter-team communication lag.
i.e. Docker lets you better push the org chart into production.
It makes it harder to identify and address poorly defined boundaries of responsibility in your app/org.
The biggest place I ever worked, I came to believe that their chaos worked because it was self-organizing. They’d split a large project into parts, and the groups that didn’t work well would find the boundaries of their mandate constantly eroded by their more capable neighbors upstream and downstream of them. Eventually all the gaps would fill in, which is why the company worked. But it meant many orgs and applications did work that would have made more sense at a different step in the process, were it not for incompetence/bandwidth. Things would happen here or there not because of some waterfall design but because of where the task was in the development flow and who had more bandwidth at the time.
They kept a lot of old guys around not because they were effective but because they were historians. They knew where the bodies were buried, and who was the right person to ask (not just which team but who was helpful on that team). We had a greybeard who basically did nothing but was nice to be around and any time you had a problem he knew who to introduce you to.
> We had a greybeard who basically did nothing but was nice to be around and any time you had a problem he knew who to introduce you to.
This is absolutely a feature and this guy probably deserves his salary.
> Theoretically, microservices allow for each team to deploy independently
You can still do that with a monolithic codebase. A Google team published a related paper: https://dl.acm.org/doi/10.1145/3593856.3595909

> When writing a distributed application, conventional wisdom says to split your application into separate services that can be rolled out independently. This approach is well-intentioned, but a microservices-based architecture like this often backfires, introducing challenges that counteract the benefits the architecture tries to achieve. Fundamentally, this is because microservices conflate logical boundaries (how code is written) with physical boundaries (how code is deployed).
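To sketch what keeping those boundaries separate can look like (a minimal, hypothetical example; these names are not the paper's actual API): the logical boundary stays an interface in one codebase, and the physical boundary becomes a wiring decision.

    # One codebase, one logical boundary: an interface.
    from typing import Protocol

    class OrderService(Protocol):
        def orders_for(self, user_id: int) -> list[dict]: ...

    class LocalOrders:
        """In-process implementation: a plain method call, no network."""
        def orders_for(self, user_id: int) -> list[dict]:
            return [{"id": 1, "user_id": user_id}]

    class RemoteOrders:
        """Same interface, backed by HTTP, for when (and only when)
        a separate deployment is actually needed."""
        def __init__(self, base_url: str):
            self.base_url = base_url

        def orders_for(self, user_id: int) -> list[dict]:
            import json, urllib.request
            url = f"{self.base_url}/orders?user={user_id}"
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)

    # The physical boundary is a deployment decision, not a rewrite:
    svc: OrderService = LocalOrders()  # or RemoteOrders("http://orders.internal")
    print(svc.orders_for(42))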
I always view it as a very good sign when senior leadership is aware of Conway’s Law.
Because it gives teams the illusion of fast progress, without being burdened by pesky things like “a consistent API,” or “not blowing up shared resources.”
> How microservices are still a default systems design architecture in anything but the largest orgs puzzles me.
A system that's made out of smaller single-purpose programs that are all made to be composable and talk to each over a standard interface, is not exactly an unproven idea.
Composable single-purpose modules that communicate over a standard interface can be more easily achieved without involving a network and the complexity that comes with it.
IMO, there are only a few cases where the added network traversal makes sense:
1. There's some benefit to writing the different parts of the system in different languages (e.g. Go and Python for AI/ML)
2. The teams are big enough that process boundaries start to form.
3. The packaging of some specific code is expensive. For example, the Playwright Docker image is huge so it makes sense to package and deploy it separately.
Otherwise, agreed, it just adds latency and complexity.
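A tiny illustration of the in-process version (a made-up example, nothing from any real system): three single-purpose modules whose "standard interface" is just an ordinary function signature.

    # Three single-purpose "services" as plain functions; composition
    # is function application, with no serialization or retries between them.
    from collections import Counter

    def fetch(raw: str) -> list[str]:
        return raw.splitlines()

    def clean(lines: list[str]) -> list[str]:
        return [ln.strip().lower() for ln in lines if ln.strip()]

    def count(words: list[str]) -> dict[str, int]:
        return dict(Counter(words))

    # A failure here is a stack trace, not a distributed-tracing exercise.
    print(count(clean(fetch("Foo\nbar\n\nFoo"))))  # {'foo': 2, 'bar': 1}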
The level of reductionism of that comment is honestly quite amusing given the topic. Maybe we can use it as an unintended warning of not going too far in the pursuit of simplicity.
Separation of concerns is the false promise of all these so-called "architecture patterns." Their advocates make you believe that their architecture will magically enable separation of concerns. They offer blunt knives to make rough slices, and these slices always fail at isolating relational concerns, inviting entirely new layers of complexity.
You had a relational database, designed to store and query a relationship between a user and their orders. Now, you have a user management service and an order service, each wrapping its own database. You had a query language. Now, you have two REST APIs. Instead of just dealing with relational problems, you now face external relation problems spread across your entire system. Suddenly, you introduce an event bus, opening the gates to chaos. All this resulting madness was originally sold to you with the words, "the services talk to each other."
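To make that concrete, here's a sketch (the table, endpoint names, and response shapes are all made up): one declarative join becomes two network round trips plus a hand-rolled join in application code.

    # Before: one declarative query against one database.
    #   SELECT u.name, o.total
    #   FROM users u JOIN orders o ON o.user_id = u.id
    #   WHERE u.id = 42;

    # After: two REST calls and a client-side "join".
    import json, urllib.request

    def get(url: str) -> dict:
        with urllib.request.urlopen(url) as resp:  # each hop adds latency
            return json.load(resp)                 # and a failure mode

    user = get("http://users.internal/users/42")            # hypothetical
    orders = get("http://orders.internal/orders?user=42")   # endpoints

    # The join now lives in application code, along with its consistency
    # questions: what if orders changed between the two calls?
    report = [{"name": user["name"], "total": o["total"]}
              for o in orders["items"]]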
Who ever claimed that REST services compose well? Because they can "talk to each other"? Really? Only completely disconnected architects could come up with such an idea. REST services don’t compose well at all. There aren’t even any formal composition rules. Instead, composing two REST services requires a ton of error-prone programming work. A REST service is the worst abstraction possible because it’s never abstract—it’s just an API to something extremely concrete. It doesn’t compose with anything.
Microservices aren’t micro. They’re hundreds of large factories, each containing just one small machine. Inputs need to be packaged and delivered between different factories in different locations, adding complexity every step of the way. This is what happens when enterprise architects "rediscover" programming—but from such a disconnected level that the smallest unit of composition becomes a REST API. Rather than solving problems, they create a far larger problem space in which they can "be useful," like debating whether a new microservice should be created for a given problem, and so on.
The same critique applies to "hexagonal architecture." In the end, with all of these patterns, you don’t get separation of concerns. The smallest unit of the architecture was supposed to be the isolation level where your typical problems could be addressed. But your problems are always distributed across many such units, making them harder to solve, not easier. It’s a scam. The truth is, separation of concerns is hard, and there’s no magical, one-size-fits-all tool to achieve it. It requires significant abstraction work on a specific, concrete problem to slice it into pieces that actually compose well in a useful and maintainable way.
Because microservices have a granularity that provides a sort of architectural distinction that a big ball of mud cannot. The sign that the design is bad in the first case is that the services are far too chatty, but that is not a bright-line distinction: it is always subjective whether the services are chatting too much, or when the messaging has become messaging spaghetti. The mere fact that you developed your monolith into a big ball of mud is bad design made manifest. So microservices make it harder to identify bad design. Designing a modular monolith from the ground up will feel like overengineering to many, until you arrive at the big ball of mud and it is too late.
Simplistic is often sadly seen as an effective replacement for the difficult achievement of simple.
Most of the strange things in the software business can be explained by the combination of
1. susceptibility to fads
2. path dependency,
or, to borrow a term from evolutionary biology, punctuated equilibrium.
I feel the same way about SPA.
At work, the decision was made to rewrite it all in React because it was supposedly easier to find people who knew React, not because of any good product fit.
An easy decision to make if it's not your money you're spending, I guess.
Mostly because people are isolated from the consequences of their shitty architectures through the joys of being employable somewhere else.
Microservices are about workforce rotation more than anything else.
Was it Bernard Shaw who wrote something to the effect of 'if I had more time I would have written a shorter letter'?
Whoever it was, I think the same holds for software: creating simple software is harder than making complex software.
Usually attributed to Blaise Pascal:
Quoting from https://quoteinvestigator.com/2012/04/28/shorter-letter/
"The French statement appeared in a letter in a collection called “Lettres Provinciales” in the year 1657:
"Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.
"Here is one possible modern day translation of Pascal’s statement. Note that the term “this” refers to the letter itself.
"I have made this longer than usual because I have not had time to make it shorter."
This quote is often attributed to Mark Twain and Benjamin Franklin as well. In one form or another, it actually goes back to Cicero, who I believe is the earliest writer of this idea.
This is all covered in the Quote Investigator link given.
Certainly it is documented as appearing in Pascal's writing, and both Twain and Franklin postdate that.
Certainly if Cicero said it, that would be earlier; but while the quote is attributed to Cicero (later than Pascal), no writings of Cicero are cited as containing the sentiment.
Again, all this is on the linked page, so I'd be interested if you could provide an earlier reference.
Do you have a reference to Cicero's writings where he says this?
This appears in his dialogue On Oratory, though attributing the modern version of the direct quote to him is a bit of a misattribution because, ironically, he doesn't say it in so few words.
The parent I responded to gave the exact Pascal quote, but it has been given in many forms by many writers, all of whom had likely read quite a bit of Cicero.
Martin Luther also said much the same about his sermons, far earlier than Pascal.
The well-known Shakespeare quote, "Brevity is the soul of wit" is also another (loose) translation of a passage from On Oratory.
Twain and Franklin were born after Pascal's death. Which specific quote of Cicero did you have in mind?
If you want it to sound intelligent, attribute the quote to Benjamin Franklin.
- Benjamin Franklin
Another great Dijkstra essay:
On the foolishness of "natural language programming".
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
For all of E.W. Dijkstra's brilliance, he had a real problem with misrepresenting opinion as fact.
I am not a computer scientist but I acknowledge Dijkstra is an honored name in that field. Yet for his shots against academia and industry, he signed it 'prof. dr.' and 'Burroughs Fellow.'
To put it charitably, Dijkstra was unusually sure that his opinions were objectively correct.
My views on Dijkstra have soured over the years. He now represents to me the high priest of a "discrete mathematics" view of computer science which has made a great philosophical mess of the whole project. He childishly associates complexity, materiality, and the human interface with "profit" -- but it is by no means about profit at all; it is just a puncture to his platonistic, circumscribed project.
Personally, I'd prefer it if everything he represented were properly demarcated as 'mathematics', leaving its complex, material, physical realisation to 'computer science'. The failure to do this has indoctrinated a generation of people into a mysticism I'm not fond of, to say the least.
What does "materiality" mean here?
That syntax has a semantics -- or, if you prefer, that all useful algorithms have operations which require devices
Computer science is not constructive mathematics -- it is not mathematics at all, since `f(x)` means that the spatio-temporal state `x` is operated upon by the IO/device action `f`.
> Hence my urgent advice to all of you to reject the morals of the bestseller society and to find, to start with, your reward in your own fun. This is quite feasible, for the challenge of simplification is so fascinating that, if we do our job properly, we shall have the greatest fun in the world.
I'm pretty much the polar opposite of Dijkstra (all application, almost no theory), but he was a real one...
> Hence my urgent advice to all of you to reject the morals of the bestseller society and to find, to start with, your reward in your own fun.
Honestly that sounds pretty nice.
But it is a false dichotomy, and a tragic one at that.
More constraints can push us to find better solutions. In our work, and in our lives too. [0][1]
[0] https://en.wikipedia.org/wiki/Ikigai
[1] https://www.japan.go.jp/kizuna/_src/7994686/ikigai_japanese_...
So, the nature of computing science should be to have fun. I like that idea, in theory... the problem is that "fun" rarely pushes one to do the really hard work needed for significant improvement, and that work isn't fun.
The hard sciences seem to lead to real-world applications more quickly. Software science only seems to advance when used by tech companies to sell ads. But there aren't that many applications for software to perform that function, so there aren't really that many material improvements.
They keep coming up with new ways to advertise (who'd have imagined an interactive navigation map that advertises burgers?). But the computer technology that controls the lives of the common man has not progressed much past the 90s. The hardware has gotten denser, sure, but the software has bloated at the same pace, without providing a significantly improved or different user experience. It's still just clicking windows and paging through media, with basically the same software working the same way, just re-written 20 times over.
These new forms of generative AI certainly have the capability to sort out information more efficiently, and skip a lot of the more manual programming required to provide features. But AI was never necessary to take a prompt and turn it into an action, as all the car nav systems in the world have shown for years. Yet for some reason I can't quite fathom, only cars have audible user interfaces? And we traded tactile interfaces for glass screens... because it's prettier?
I don't care about simplicity or complexity, any more than I care about how antibiotics are produced. I care that I can take a pill and get better.
Similarly, it would be great if it were just a little bit easier to do simple things, like check my bank statement, without worrying about "cyber threats", or jumping through hoops when the next password replacement fails, or having to upgrade my app for the 3rd time this week before I'm allowed to check the bank statement, or having to click through offers for yet another credit card, or navigate a complex tree of "features" that varies from app to app, and month to month. I just want to read my god damn statement.
I don't know if the philosophy of producing this technology will ever be resolved. But I've stopped caring. The state of computer science today is, I've given up hoping for something advanced. I'll settle for something that isn't painful.
Watch out: sometimes simplicity carries a higher time cost... the cost of creating it.
Systems consisting of hundreds of billions of floating-point weights, whose internal workings no one can understand, sell very well. So he was on point here.
A thought-provoking essay that makes me think "yes, that's exactly right," again and again, whenever I re-read it. Highly recommended.
Please consider using the original title: "On the nature of Computing Science."
The essay is about much more than simplicity versus complexity.
It is a short paper, well worth reading in full. The full quote is:
Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.
Another good quote is:
To which we may add that, the less the new science is understood, the higher these expectations. We all know, how computing is now expected to cure all ills of the world and more, and how as far as these expectations are concerned, even the sky is no longer accepted as the limit.
The analogy raises, for instance, the questions which current computer-related research will later be identified as computing's alchemy, and whether this identification can be speeded up,
Describes the current ML/AI craze perfectly.
Not just true of code but of everyday life. Just look at our financial system: extreme complexity, designed to exploit the masses and benefit the few, and it still sells.
Incredibly well-written. Not all of his opinions stood the test of time, but a pleasure to read nonetheless.
Wonderful quote! Thank you! Somewhat dealing with "frontend fatigue" right now. This totally hits home.
Inspirational. But I think it's almost impossible to create simpler systems, especially ones that get updated very often. Sure, in the beginning it would be elegant, but down the line the cost of elegance keeps increasing, and most people will trade it off for complexity.
Yes, let’s all just give up and stop trying.
I feel like this is philosophy in a nutshell.
I have a weird hypothesis about the “worse is better” phenomenon which is what I think he’s getting at.
In back office / cloud / IT type stuff I wonder if complex things like Kubernetes win over simpler approaches precisely because they are more expensive and labor intensive. As a result of being labor intensive they pick up more users who after investing in climbing their learning curve become champions. Simpler or more “fire and forget” systems require less labor and so win fewer converts.
I've been doing mostly backend dev, and watching from the sidelines as buzzwords come and go: vmware, virtualbox, puppet, vagrant, ansible, zookeeper, mesos, docker-compose, chef, etcd, docker-swarm, terraform, helm. I don't know what half of them do.
But honestly two of them stand above the others: docker and kubernetes.
Docker is what your program is, and Kubernetes is somewhere for it to live while it's running.
What are the simpler, more fire-and-forget, approaches that you have in mind as alternatives to Kubernetes?
Complex software is legacy software, because you won't have the money and effort to keep maintaining that complexity.
Ironic given the complexity of Dijkstra's various algorithms.