This article represents the death of excellence and craftsmanship. It's the McDonald's-ization of software development. McDonald's does make the most consistent "food" in the world. But nobody would say it makes good food. No chef would ever want to work preparing food at McDonald's.
The author pointed out many good ideas - the problem is the extremism. The author clearly hates software developers and the complexity of software development (a common trait among management), and wants to eliminate the complexity and variability inherent to dynamic human creation. Once you've taken all the creativity and joy out of it, and distilled it down to something any monkey or even ChatGPT can do, yes, you've achieved such a system, and it's quite a dreadful place to work.
A less extreme version of what the author proposes can enhance creativity and net velocity - but these systems need to be in balance, supporting and enabling the creative process, not eliminating it.
The best advice I've seen on the issue is "Don't scar on the first cut", as in you shouldn't try to add a new rule every time you have an outage.
That being said, I absolutely hate heroes.
I've worked with a couple that get their thrills from the adrenaline buzz of swooping in and fixing the big problem ... and walking away. They don't put the work into documentation or making systems resilient because that's boring. I like boring. Boring means I can clock off at the regular time and not think about work until the next day.
> I absolutely hate heroes
I dislike the heroes' bosses. They're to blame for the situation. Managing a team of engineers is not like being a dungeon master.
You don’t know what heroism is. Read the literature of heroism before you decide you hate your fellow humans. Each one of us is capable of heroism.
And a system that requires it of its employees to function is fundamentally abusive.
The article is not about real heroism though. It’s about workplace “heroes” in the sense of diligent, creative, productive workers who have to take personal responsibility for the success of the business despite lacking support from said business.
Ah, now the Natural Born Heroes heroism and the virtue of Xenia (ξενία) I can get behind.
It's the ego-inflating "heroes" (in scare quotes) that crave validation that I find so exasperating.
But the world has enough room for both McDonald's and the farm-to-table restaurant. They serve different markets and fulfill different needs. There is room for both the Big Agriculture monocultures that have industrialized agriculture and the Polyface Farms-style intensive farmers, who are both happier and more productive and profitable per acre than the monocultures. And large enterprises, for all their cookie-cutter, cog-in-the-machine dehumanizing process, are great places for many folks: for the early-career folks who are just getting started on the experience ladder, learning professional development, and who benefit from all the guardrails; for mid-career professionals starting families who need to tread water in their career while they focus on their young children at home; and for late-career elders who prioritize both stability and the late-career paycheck to help get them into a comfortable retirement.
The macro goal though isn't to attempt to deny the benefits of process in large enterprises, it's to promote more productive small business. There are natural benefits to scale in large enterprises, but in software, many of those benefits to scale are not inevitable. Outfits like WhatsApp succeeded at building immense scale with very few engineers. It may be a bit of survivor bias, but small businesses can outmaneuver large enterprises when they have more productive talent that is not hemmed in by all the safeguards that larger enterprises are required to have. But this is still separate from the fact that each have their place.
I’m a permaculture designer and have run a small vegetable farm. I have visited Polyface twice and spoken personally with Joel Salatin both times. I’ve read several of his books. I think that Polyface shows up in this discussion very much on the side of the systemic safeguards the article recommends. His farm is full of systems, and it safeguards a great many things by design. He plans for every risk he can think of and adapts his system in response. He got the farm from his father and has largely transferred it to his son, and they have built a business training both interns and the interested public in their methods.
Safety measures ≠ world-dominating industrial scale.
Sorry, I wasn't trying to claim that small businesses don't need any safeguards. For example, small businesses also need backups, they need TLS certificates, lots of industry-standard stuff. What they don't necessarily need are everything required by the checklists that are thousands of items long where every box must be checked to pass SOC 2, ISO 27001, FedRAMP, etc.
I agree that the author’s proposal is too extreme, but some of their advice is good.
I think any company wanting to scale to a significant size will need to learn to implement effective systems rather than rely too heavily on the efforts of heroic individuals.
However, what the author is proposing is the death of evolution in systems: they seem to think that an optimally efficient and effective system already exists and that everyone should just be using it.
This is not true, because your competitors are probably always looking to change their own systems so that they can outmanoeuvre you. Unless you respond by adapting your systems, you will likely be outcompeted.
My company’s approach is to progressively systematise everything reasonably possible, but it also has the philosophy that any existing system can be challenged for change or removal.
We regularly have retrospectives, primarily to try to identify how we should change our systems. We never try to change too much at once, but over long periods of time our systems have tended towards being highly effective.
But understanding and identifying highly effective changes (beyond the low hanging fruit) requires exceptional individuals.
We will always need heroes prepared to challenge existing thinking in case there is a better way.
This comment combined with a recent reflection on the ‘irony of automation’ makes me worried that the product I’m building to make life easier will instead lead to loss of craft, pride, and ingenuity.
Honestly, still processing through it to make sure that all the people in my products and systems are seen.
It might sound redundant but I do think there can be an achievable balance. I worked at an organization that was allergic to documentation and safeguards of any kind and relied fully on certain talents to resolve issues. This was not just incredibly inefficient but also caused many problems when some of these people left the company. I think some safeguards and consistent documentation would have avoided the majority of this. And the talented individuals would also have had more time to work on other things, learn new skills and get even better at crafting in a more broad sense instead of only playing fire department.
And finally, I can reassure you that there will always be room for “individuals” and “heroes” to do their craft and also save the day. Most systems don’t run like clockwork, there are always bugs, weird failures, strange oddities, most systems will need saving at some point. So, creativity and craft will generally have a place and value.
Spot on. The comparison between a skilled software engineer and a skilled assembly line worker is a flawed one.
The game of the assembly line worker is mass production; the impact of their skill is limited to a subset of all units and their contribution to each unit is also limited.
A software engineer is more like a civil engineer or a surgeon. Their work affects the entirety of whatever they're working on at a foundational level. The value of the unit they work on is extremely high. Also, a software engineer doesn't assemble things; they plan and design things. Their impact outlasts their tenure by many years. Just like a surgeon's impact extends far beyond the operating table. Once the software engineer has quit the company, the code which they wrote is still crunching numbers and still earning income for their ex-employers while everybody sleeps.
It's not just their code which lasts, it's the entire philosophy which they breathed into the product and their team which lasts as well.
> The comparison between a skilled software engineer and a skilled assembly line worker is a flawed one.
And the difference is that we engineers design and maintain the assembly lines, as opposed to being line workers.
I agree wholeheartedly. I’ve seen this philosophy take over a software organization (Ericsson way back) and the results are dreadful. But it’s not only that the org turns into a dreadful place to work. You also lose in the market to competitors with a more creative culture. Ericsson could easily have been the AWS of the world and the creators of Android. Now it’s like… a shadow of its former self.
(There are other reasons too of course, but I think this “process over people” philosophy was a major contributing factor.)
Yes. Focus on building all your team members into heroes instead.
The other way around is possible, but you will end up with a team of 1000 people doing the same work as 10.
BTW, I am currently working in an enterprise with a small team mixing experienced developers (heroes - but still always learning, because of new complexities) and new developers (heroes in the making).
Absolutely fantastic and we create wonders, but it requires management to acknowledge skill and exceptionalism.
If creativity means allowing people to be creative with dangerous language features that they themselves probably don't fully understand and write code in a way that nobody else understands, then I would want none of it.
Having working and maintainable software is a lot more important than having a team member show off what they are able to do with eval; they can put that into a demo side project.
That's not creativity; that's being an amateur.
I think the author gets it utterly wrong.
My argument would be: the greater the complexity, the higher the level of personal initiative and responsibility needed.
This is not an arm-chair assertion. For example, read the works of Admiral Rickover - https://govleaders.org/rickover.htm - on building and managing nuclear facilities.
You'll see the enormous focus on individual initiative and personal responsibility.
At the end of the day, heavily regimented processes are too brittle for complex problem-solving.
In fact, one way I differentiate a sophisticated leader from a less sophisticated one is via their understanding of this fact: do they value individual initiative or not?
If they do not, they're not in a great position to lead complex initiatives.
Spot on! It seems to be one of those things where the left and right sides of the bell curve both value individual initiative, creativity and responsibility, while the middle appears to believe systems are a panacea.
Once you've built enough systems and seen the tradeoffs and understood the raw high dimensionality of the problems, one may come full circle and realize yes, good systems are great, and some are table stakes, but only insofar as they add rather than subtract value relative to humans who are invested with great personal responsibility, mastery and creativity.
Rather than turning everything into a dreadful machine where humans are but a cog (à la The Matrix), build systems which are more like force-multiplying exoskeletons (Edge of Tomorrow).
I agree that "lived experiences" in highly regimented environments can be extremely harmful.
But my point is on the efficacy of it. I think regimentation doesn't work well in fields where there's complex problem-solving involved.
That's what practicing a profession stands for: that you cannot be dictated to by laymen, and that you're held accountable for results.
I mean, hospital management cannot come in and dictate how the surgeon should wield his knife (but they can & must keep track of his success rate, and question if they see red flags). Or lay administrators cannot come in and dictate how a dam should be designed or what calculations are to be performed to get a working engine.
Regimentation works only for the simplest of things. Beyond that - it's hopeless.
I think we are violently agreeing?
A hospital could try to dictate such things to a surgeon at its absolute peril. No talented surgeons would work in such an environment.
It's a straw man because it's absurd, but there are some tech companies which do cross the line of dictating things that are genuinely hopeless and pointless to dictate, in the mistaken belief that systematic uniformity is better. And they progressively pay a steep cost for doing so.
The author's being a bit hyperbolic, but the point stands that systems should be built to prevent or at the very least mitigate damage due to human error, because human error will ALWAYS happen (even to the heroes).
No, this isn't about bringing on cheaper, less skilled labor; it's about making a system accessible so that people can work on it without fear.
You STILL need those heroes, those masters of the system, because they're the only ones who can not only keep the innate and unavoidable complexity of the system in their heads, but can also expertly manage and improve the human interface simplifications and safeguards that prevent catastrophe.
We've done this in other industries, and yet there's still so much pushback when it comes to software engineering...
The author seems to enforce programming constraints that fix one problem they identified, in the hope of fixing all related problems that may occur in the future.
It is obvious that he doesn’t have experience with long-living systems, because this approach leads to inflexible software. For example: memoization of React hook returns is sometimes good, but never always. He also values processes much more than individuals - well, we had a manifesto against such views…
Future-proofing, assembly-line mentality, know-it-all thinking - I feel sorry for those he manages, and for the company when they become unable to add features at all.
>However, we had a long discussion with one programmer who insisted it was a bad idea because of "premature optimization", the fact that React docs does not have recommendations about it, and "nobody does it that way". In my opinion, this was a clear case of a programmer resisting the system approach because he wanted to spend time fixing the same problems over and over and pretend to be working hard.
This article and advice is awful. The fact that the author ignores completely reasonable arguments made by the dissenting programmer, instead attributing their 'resistance' to some dubious, made-up notion that they would prefer to do meaningless rework, speaks volumes.
I think it’s worth also quoting the rule that the dissenting programmer objected to.
> One of the solutions was to introduce a rule to memoize everything returned by hooks, to automatically prevent future problems with memoization.
… which introduces a systemic performance penalty to prevent outlier performance problems.
The article argues against “heroes”, but I think the more appropriate term is “experts”.
Indeed, Dan Abramov is pretty expert at React, and he has a compelling article which advises against memoizing everything: Before you memo https://overreacted.io/before-you-memo/
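To make that tradeoff concrete, here is a minimal plain-JavaScript sketch of last-value memoization - roughly the idea behind React's `useMemo`, though not its actual implementation. The point of the "before you memo" argument is visible in the code: the wrapper does an argument comparison on every call, so when the computation is cheap, that overhead is pure waste.

```javascript
// Single-slot memoizer: cache the last arguments and result, and
// recompute only when the inputs change. Illustration only, not how
// React implements useMemo internally.
function memoizeLast(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    if (
      lastArgs !== null &&
      args.length === lastArgs.length &&
      args.every((a, i) => Object.is(a, lastArgs[i]))
    ) {
      return lastResult; // cache hit: skip recomputation
    }
    lastArgs = args;
    lastResult = fn(...args);
    return lastResult;
  };
}

// Cheap computation: the per-call comparison overhead rivals the work
// saved, so memoizing buys nothing.
const memoCheap = memoizeLast((a, b) => a + b);

// Expensive computation with repeated inputs: here memoization pays off.
const expensive = (n) => {
  let acc = 0;
  for (let i = 0; i < n; i++) acc += Math.sqrt(i);
  return acc;
};
const memoExpensive = memoizeLast(expensive);

console.log(memoCheap(1, 2)); // → 3 (computed)
console.log(memoCheap(1, 2)); // → 3 (cached)
console.log(memoExpensive(1e6) === memoExpensive(1e6)); // → true (second call is a cache hit)
```

Applied blanket-style to every hook return, the comparison cost is paid on every render whether or not anything expensive was avoided - which is why "memoize everything" is a questionable default rather than a free win.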
React will soon do it that way. React 19 will introduce a compiler that automatically memoizes all components and hooks.
Not the same as manually implementing it in every component as a rule.
This is not at all an accurate description of the compiler work.
Enterprise programming is the management of labour costs, aka lowering software developer wages. Prioritizing systems is a way of pursuing this objective: it introduces less prepared people into the work while trying to limit their consequential mistakes.
I find it quite telling that the author quotes the "fundamental attribution error" early in the article, and then spends the rest of the article making textbook examples of exactly that.
> Enterprise programming is the management of system complexity. The main goals of most enterprise projects are to minimize bugs, ensure scalability, and release as soon as possible. These goals are unreachable in projects where people rely on individual skills rather than on a system-based approach.
Whenever I see anything about "enterprise ..." I'm pretty sure they're holding it wrong. These are blanket statements about a hand-wavy category, presented with an unhealthy dose of false dilemma.
The real point is "blame processes not people" and fix the processes. As for autonomy, the goal should be to give as much as possible, but only as much as your processes can effectively/safely allow.
> However, we had a long discussion with one programmer who insisted it was a bad idea because of "premature optimization", the fact that React docs does not have recommendations about it, and "nobody does it that way". In my opinion, this was a clear case of a programmer resisting the system approach because he wanted to spend time fixing the same problems over and over and pretend to be working hard.
Author has some serious issues he needs to work on. He should also probably delete this post, because it looks embarrassing.
He wanted to memoize everything returned by React hooks as a kneejerk response. I don't even have anything to compare it to. It is plain silly.
The degree of specialization that current tech stacks encourage is not helping prevent the creation of such heroes.
It’s no wonder that we now have software developers working under the assumption that one person can run the whole show.
Surely, this article is lampooning the idea that with enough rules and regulations, skill and experience don’t matter.
For a contrasting argument, I recommend reading Programming as Theory Building by Peter Naur.
Ralph Waldo Emerson is as relevant as always: "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines."
the example the author gives in the end is such low-level bad programming advice that it’s not possible to take the rest seriously
No, build heroes. Then the heroes will build systems as they see fit.
Build societies, and then the societies will build heroes.
I am a humanist. I will agitate and undermine any initiative or attempt to dehumanize our systems or systemitize dehumanization.
A hero is not some extraordinary person. A hero is an ordinary person who, in the ordinary way humans do, transcends mere established procedure and solves an ambiguous problem— a “wicked” problem— for the benefit of others. That’s a hero. A hero is not a six-sigma person among other people, but rather unique among any set of non-human systems.
Keeping enterprise software behemoths on better life-support is no longer a viable strategy for most ... much of it is vulnerable to generative and predictive AI powered incarnations of whatever it was originally built to do ... retro-fitting will, in most cases, not be as competitive as rebuilding using a completely different architecture
This is the early 1700s for enterprise software and we are witnessing the equivalent of discovering the steam engine -- almost all of it will need to be rebuilt in the next 15 years, and that requires the inverse of a cookie-cutter engineering approach, as optimal patterns are yet to be established
I agree 100%. In fact, it's one of the principles of the devops culture: automation, trust, light processes, early feedback.
Companies that rely on heroes and constant firefighting to keep running are doing it wrong. Very wrong.
> If you have strong enough rules, you don’t need to worry about the skills of the programmers.
lol. lmao, even.
Followed by
> In my opinion, this was a clear case of a programmer resisting the system approach because he wanted to spend time fixing the same problems over and over and pretend to be working hard
Which is funny when the first bullet point under "skills" in the author's resume [0] is:
> Team player mentality
I love working with "team players" like this.
[0] The "About me" page in website: https://latexonline.cc/compile?git=https://github.com/vitons...
If I were the author of this blog post, I'd be embarrassed. This is plain embarrassing.
I didn't know about latexonline.cc. TIL. Thank you :)
Indeed. That strong rule at the end where he want to memoize everything made me laugh out loud. What could ever go wrong with such a rule..
Sounds like magic
[laughs in job security]
Alternatively - if you're that 10x oil rig worker, make sure no one ever does this, because then you're turning from a hero to garbage
This article is a joke, but a joke cruelly missed by its own author.
Case in point: holding 'enterprise programming' up as a paragon of software craftsmanship.
A realization I've had, seeing true 'enterprise programming' at work, is that most of the time processes act like band-pass filters for quality: on the lower side, some sorts of garbage are avoided by adherence to process, and on the upper side, true software excellence is made impossible, or at least far harder than it ought to be.
A few examples:
- A simple CRUD app got a less-than-ideal data flow, with routing and data-passing between pages treated as an afterthought. The reason? The (Scrum) process-heavy organization understood work solely as defined through JIRA tickets, and they naturally created one JIRA ticket per page and per component, and of course heavily parallelized the work between many coders. As a result, each page matched the specs, but the overall data architecture fell through the cracks.
- We had one member of a previous team who alternated between 'okay' and '-1x coder'. His main issue was that he didn't really think about the impact of his code; he just kind of appeased the gods of linting until they were quiet. The problem is that his changes were often just breaking stuff, or straight up introducing deceitful typing in a way that would have spread through the application. Automatic checks were all green, but everything was subtly wrong.
- Over-engineered types in TypeScript. Some (usually junior) ambitious programmers see the power of types and dream of creating types so perfect that the whole program will be created just by pressing 'tab' at the end. It sometimes works, but it often creates ungodly monstrosities where a KISS approach would have been less painful. We've all gone through this stage, but some people just watched "The Sorcerer's Apprentice" section of the 1940 Disney movie 'Fantasia', stopped midway, and got too busy automating everything.
- Over-reliance on outside libraries, and an organizational incapability of creating anything remotely custom. Yes, you should avoid reinventing the wheel, but if your job is also to make wheels, and none of the ready-made wheels fit the situation, then you had better be making wheels sometimes.
- Over-reliance on fads and marketing. Process-heavy environments penalize individual initiative. As a result, the "way everybody does something" becomes the only possible way, and this is very susceptible to fads and content marketing. Your soul dies a little when you cannot stop the "Sure, Redux seems very heavy for the needs of this project, but Redux is very powerful! That's why it's in every project!"
In general, processes and systems attract 'system people'. They can be useful, but too many will just create a dogmatic, unpragmatic work environment where business considerations are swept under the rug.
very good examples - pretty much every one I've come across. The over-engineered types one (add Python typing to the list too) feels especially spot-on and common
When the engineering department of a company is not the major lever in the company's growth, this philosophy might be fine. That's the case for many large enterprises, and probably many startups too.
When the competitive advantage is not the technology, it is probably smart to make the engineering department dumb.
Clearly that leaves the company exposed to challengers who will outcompete them with technology. But not all markets work like that.
In contrast to some of the other commenters here, I mostly agree with the points made in the post. However, I will argue that it's very much possible to end up in environments where even these suggestions are infeasible, for a plethora of reasons.
> For example, instead of trusting that programmers won’t push to the master branch, the repository owner can enable an automatic rejection for any attempts to push directly to the master branch, except through a pull request. This is a great example of a common practice that prevents potential problems. Programmers may not want to push to the master branch, but they could do it accidentally, as they are only human. I have personally caught myself doing this at times.
Even outside of pushback ("pull requests introduce friction"), someone could hypothetically just turn the functionality off; limiting access to that setting may or may not be possible with the tier of SCM solution that you manage. Even if it is possible, you might find people creating pull requests that they themselves approve and merge. For a concrete example: https://docs.gitlab.com/ee/user/project/merge_requests/appro...
Want to make those mandatory? Suddenly your org needs the paid version of GitLab. Guess that's not happening because at that point you're not up against just resistance from some developers, but against the org itself which would have to fork out a bunch of money and you'd need to justify it, while having few/no people backing you up.
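As a stopgap when server-side enforcement isn't available, a client-side pre-push hook can at least catch the accidental direct push. Here is a sketch (assuming a Husky-style Node hook; the branch names and wiring are illustrative) of the check such a hook could run. To be clear, this is no substitute for server-side protected branches, since any developer can skip local hooks.

```javascript
// Sketch of the check a client-side pre-push hook could run.
// Server-side protected branches remain the real safeguard.
const PROTECTED = ["master", "main"];

function pushAllowed(branch, protectedBranches = PROTECTED) {
  // Reject direct pushes to protected branches; everything else passes.
  return !protectedBranches.includes(branch);
}

// In a real hook you would read the current branch from git, e.g.:
//   const { execSync } = require("node:child_process");
//   const branch = execSync("git rev-parse --abbrev-ref HEAD").toString().trim();
//   if (!pushAllowed(branch)) process.exit(1);

console.log(pushAllowed("feature/login")); // → true
console.log(pushAllowed("master"));        // → false
```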
> Improve workflows, not individuals. Make the process the lever to apply effort. Identify problems that occur systematically, introduce processes to resolve them, and prioritize process over people. Processes work like tests in programming. When you find a bug, it’s not enough to just fix it because the same bug could appear next week when someone else makes changes to the code. To properly fix a bug, the programmer must add a test to ensure the issue will caught if it will reproduce again.
Sometimes people just don't care about the idea of improving the processes, because they see it as additional work that they don't benefit from themselves.
The same goes for linking pull requests to issues in source control (because, again, you couldn't get an integration between GitLab/Jira working, for the same reasons as above), for adding basic documentation to the code to explain why it works a certain way ("code should be self-documenting" ignores the fact that you lose the context of the requirements, because code only explains WHAT, not WHY), for adding examples of front-end components to a playbook so others can re-use them (otherwise they'll just create the ones needed for their own bit of the codebase in some package, and you'll eventually end up with dozens of duplicates), or even for having helpful README and onboarding files.
The same with tests: suddenly you're up against a codebase that's actually hard to test, in addition to sometimes needing to mock complex webs of service patterns and logic that just goes all over the place. Writing tests might take you 2-5x more time than the actual code and that'd raise eyebrows - you're basically fighting a losing battle.
> Make decisions based solely on written conversations, with an overview of all possible solutions, including all known positive and negative aspects of each.
This is nice and good, but many will prefer to argue in bad faith and expect you to do everything "their way", even when it's objectively harmful to the codebase (e.g. wanting to fail fast even when that means a field with no data - one that's only used for display - breaking the entire form and preventing the user from seeing anything at all, while insisting that the input data should always be good and that you shouldn't add error handling for recoverable states or fail/degrade gracefully).
You can document all of the facts in the world; they'll just shift the burden of proof onto you and disagree with everything you say, and you'll get nothing done, holding up releases and looking bad.
> Require material evidence, like measurements, to support any statement.
They won't care. I've had cases where a page needed some 4000 SQL queries to render, and people still argued that the code was more readable and easier to work with the way it was written (it wasn't), as opposed to just a few queries against a bunch of views. I showed that the performance was measurably worse, and that rewriting the code helped, but the same patterns are still used by them elsewhere in the system.
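The pattern being described is the classic N+1 query problem. A toy sketch (the "database" here is just an in-memory map that counts queries - purely illustrative, no real SQL involved) shows why the query count explodes with row count in one style and stays constant in the other:

```javascript
// Toy N+1 illustration: one query for the list, then one query per row,
// versus a single batched lookup.
let queryCount = 0;

const authors = new Map([[1, "Ada"], [2, "Grace"]]);
const posts = [
  { id: 1, authorId: 1 }, { id: 2, authorId: 2 },
  { id: 3, authorId: 1 }, { id: 4, authorId: 2 },
];

function queryAuthor(id) {        // simulates SELECT ... WHERE id = ?
  queryCount++;
  return authors.get(id);
}

function queryAuthorsByIds(ids) { // simulates SELECT ... WHERE id IN (...)
  queryCount++;
  return new Map(ids.map((id) => [id, authors.get(id)]));
}

// N+1 style: one author lookup per post.
queryCount = 0;
posts.map((p) => ({ ...p, author: queryAuthor(p.authorId) }));
console.log(queryCount); // → 4 (grows with the number of rows)

// Batched style: one lookup covers every post.
queryCount = 0;
const byId = queryAuthorsByIds([...new Set(posts.map((p) => p.authorId))]);
posts.map((p) => ({ ...p, author: byId.get(p.authorId) }));
console.log(queryCount); // → 1 (constant, regardless of row count)
```

Scale the post list to a few thousand rows and the first style is exactly the "4000 queries to render one page" situation.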
> Set strict rules for linters to ban most dangerous language features and force programmers to write boring and obvious code.
If you enable rules a few years into a project, you'll get hundreds if not thousands of errors, only some of which will be fixable automatically. Then, suddenly, the burden of ensuring that all of the refactored code works like the old code (it won't) is on you, and you're basically shooting yourself in the foot. All while people come up with all sorts of rules for how code should be formatted in pull requests - of course, all to be enforced manually, with no code generation for common patterns across the codebase, just a lot of busywork.
I've literally had a case in the past where the codebase had a bunch of index.js files in the modules with basically nothing else in them. It'd be like "some-module/index.js" and that's it. Clearly that broke code search, since you could find nothing by the name of a file, and when you were looking at where certain code was located it'd always show that "index.js". So, I made a script that renamed the single-file modules into files named like the module, e.g. "some-module.js", updated the imports, and added a prebuild script to warn about cases like that in the future and prevent that type of code from being added, because the CI pipeline would then fail. It worked. It improved the navigability of the codebase. There were no downsides.
You know what happened? A few weeks later I saw a bunch of index.js files again, with the checks removed from the codebase, with no discussions about that and no concrete arguments about why using index.js everywhere is better. It's like people hate the idea of tooling and automation, not that writing your own custom plugin for a JetBrains IDE is easy, either.
> Once, we had performance issues due to unnecessary re-renders in a complex React project. Our investigation revealed that the problem was related to memoization, a fairly common issue. One of the solutions was to introduce a rule to memoize everything returned by hooks, to automatically prevent future problems with memoization. This idea was straightforward, simple, and stupid (that is good). However, we had a long discussion with one programmer who insisted it was a bad idea because of "premature optimization", the fact that React docs does not have recommendations about it, and "nobody does it that way". In my opinion, this was a clear case of a programmer resisting the system approach because he wanted to spend time fixing the same problems over and over and pretend to be working hard.
If you have the authority to enforce decisions, then go ahead. If you don't, then there's often nothing you can do, especially if people pull rank.
Realistically, you either get lucky to work on a project from day 1, you manage to work with a team that has views that are similar to your own (whether that's using containers for most things, or shipping by copying directly to prod from your workstation through SFTP; whatever, the main thing is your opinions match), or you're out of luck. I've honestly considered quitting places in the past due to this exact reason.
>You know what happened? A few weeks later I saw a bunch of index.js files again, with the checks removed from the codebase, with no discussions about that and no concrete arguments about why using index.js everywhere is better
How'd this story wind up?
> How'd this story wind up?
A team vote on which solution to use, an issue created in the change management system to re-implement the approach that gets rid of index.js files.
Definitely a case of conflicting ideas about software development, with everyone thinking that they know better. It would have been better to have the discussion early on, as opposed to people just changing things how they want - e.g. if I had just changed them back, that would have been equally unhelpful.
Of course, I have ideas that are also sub-optimal or plain wrong sometimes, but getting those shot down in a group setting ahead of time would actually be better than implementing something bad and causing headaches for people down the road.