Author points to TigerBeetle as an example, which I wasn't familiar with, so I went down a rabbit hole.
Wow. Yeah. If you're writing a financial ledger transactions database from scratch in Zig, where safety and performance are your top two goals, so that you can credibly handle a million transactions per second on a single CPU core, well, damn right you can't afford to take any risks on introducing any dependencies. Makes absolute perfect sense to me (no sarcasm).
But the average developer, writing most line-of-business CRUD systems, well, quite honestly they're not that strong to begin with. Most dependencies may be bug-infested garbage, but they're still higher quality than what most developers are capable of producing themselves. Most companies are bottlenecked by the caliber of talent they can realistically recruit and retain, and internal development standards reflect the average caliber on the payroll.
So like most advice you read on the internet, remember, everybody is in different circumstances; and polar opposite pieces of advice can both be correct in different circumstances.
NIH is amazing as long as you are realistic about what you are taking ownership of.
For example, the cost of maintaining a bespoke web frontend "framework" that is specific to your problem domain is probably justifiable in many cases.
The same cannot be said for databases, game engines, web servers, cryptographic primitives, etc. If you have a problem so hard that no existing RDBMS or engine can handle it, you should seriously question the practicality of solving your problem at all. There's probably some theoretical constraint or view of the problem space you haven't discovered yet. Reorganizing the problem is a lot cheaper than reinventing the entire SQLite test suite from zero.
That RDBMS example of yours is fascinating.
There are plenty of RDBMSes out there (Wikipedia lists some 100+ of them), and there are plenty of problems that most of them cannot solve but some of them do.
Those people considered the practicality of their solution and went ahead with the implementation.
I can think of two reasons for using a third-party dependency:
1) A dependency on a third-party service provider that publishes the dependency. So long as that service provider is current, the dependency should be maintained.
2) A shortcut to code I don't want to write.
I have no arguments with (1); there's a business reason, and the lifecycles should match. However, I should still expect major-version breaking changes in order to keep my application running. For (2), the wins are less clear, more dependent on the perceived complexity of what I can avoid writing.
Taking on any kind of dependency means that someone else can dictate when I need to spend time updating and testing changes that don't add anything to my product. Taking on a third-party dependency is always taking on a responsibility to maintain a codebase or the risk of not doing so.
There's a third one: when it comes to compliance and security tools, you don't want to build them even if you can, because:
1. It is a liability
2. There is a trust deficit during audits and other events. If audits are internal-only, sure, you can build it, but when it is third-party audited, auditors often know the product and are familiar with its features.
> auditors often know the product and are familiar with its features.
Or what if you chose a dependency that this auditor is unfamiliar with, so it takes even longer (whereas if you NIH, you'd have the full source and thus can give the auditors the materials to audit)?
There is another solution, which is to just retain the source for your dependencies and bring them in-house if needed. You still may need to abide by licenses, and many things shouldn't need to rely on external deps, but I've seen a lot of wasted time that continues because we came up with our own way of doing something that others do better, and that can be difficult to get rid of because of unwarranted team pride, familiarity with the homegrown code, or not wanting to trust the lone dev who says "try this instead."
I would argue that 2 is the much more important reason. A dependency is worth it if it saves you a lot of time building the same thing. For example, if you wanted to ship a general computing device, you should probably run Linux on it rather than building your own OS from scratch, because you'll save literal years of work. Even if Linux had stopped being maintained, this would still be the right call compared to building your own (of course, it would be even better to choose some other OS still being actively maintained, if any is available).
This is where boring languages like Go and Java come into their own: less frequent breaking changes.
There are some languages whose communities aim for libraries to be stable for several months, and there are others that think in terms of a decade or longer.
Most important reasons imo:
1) even though reality has proven us wrong time and time again, we can just not look at the dependency too closely and just act as if it's written and maintained by competent, caring people and is of highest quality. No worries!
2) in case shit hits the fan, let's assume worst case and there is a vuln in the dep and you get hacked... It's somebody else's fault! \o/
> 2) in case shit hits the fan... It's somebody else's fault!
A contrived example, but good luck explaining to the lawyers that OpenSSL had this bug that caused all your customer data to leak and your company is being sued. If your motto for dependencies is "we can just not look at the dependency too closely and just act as if it's written and maintained by competent…" I'm reasonably sure someone is getting fired in that situation, if it doesn't sink the entire company.
Move fast and break things isn’t exactly a viable strategy in all industries.
Now, as I said, OpenSSL was a contrived example, but what if instead it was your ORM that didn't use templated queries but rather just did string interpolation, and there was an SQL injection attack? Considering this is still one of the most popular vulnerabilities on the web, someone is messing stuff up somewhere, and you are blindly hoping it isn't in the 10k lines of ORM library you pulled in.
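For concreteness, here's a minimal Go sketch of the difference being described, using only database/sql; the table and column names are made up for illustration:

```go
package store

import (
	"database/sql"
	"fmt"
)

// Vulnerable: user input is spliced straight into the SQL text, so
// name = "x' OR '1'='1" turns the query into "return every row".
func findUserBad(db *sql.DB, name string) (*sql.Rows, error) {
	q := fmt.Sprintf("SELECT id, email FROM users WHERE name = '%s'", name)
	return db.Query(q)
}

// Safe: the value travels as a bound parameter, never as SQL text.
// (Placeholder syntax varies by driver: ? for MySQL/SQLite, $1 for Postgres.)
func findUserGood(db *sql.DB, name string) (*sql.Rows, error) {
	return db.Query("SELECT id, email FROM users WHERE name = ?", name)
}
```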
> Reorganizing the problem is a lot cheaper than reinventing the entire SQLite test suite from zero.
Sure, but if you aren't happy with existing DBs you are probably wrong in thinking you need a general DB instead of a file layout on a filesystem.
It seems easy, but managing files as a database safely is actually very hard, especially ensuring atomicity.
Portability is a problem as well.
You do not need an SQL database however, there are others available.
The technology is only part of the problem. If you aren't happy with any existing database, throwing even more technology at it is definitely not going to help - statistically speaking. You're up against millions of engineering hours in this space.
At some point, we need to get up from the computer and go talk to the business and customer about their problem in terms that don't involve anything more sophisticated than excel spreadsheets. There is definitely some aspect you missed if you think you need to build a fully custom technology vertical to get them to pay for a solution.
Of course, all of this advice is pointless if what you are doing is for amusement or as a hobby. I think a lot of arguments about technology would be defused if we prefaced them with our intended applications.
Using files instead of SQLite very much is NIH. Good luck not corrupting or losing your data.
Dependencies introduce risks, but not using them at all puts you at a competitive disadvantage against those who are using them and thus achieve faster development time and time to market.
What you need is a process to manage dependencies:
1) Only consider open-source dependencies.
2) Introducing new dependencies requires a review. Not just a code review on the pull request introducing it, but checking the license, estimating how much work it would be to rip out, auditing it for history of security vulnerabilities or bugs, whether it is live and still gets updates, how vibrant the community is, and so on.
3) If possible, review your dependencies for possible security issues. Doing this at scale is expensive and the economics of this are still an unsolved problem (I have my ideas: https://blog.majid.info/supply-chain-vetting/).
4) Do not adopt a dependency you are not willing and able to take over the maintenance of, or fork if necessary. At a minimum, it means you have built it from source at least once, not just used binary packages maintained by someone else.
5) Preemptively fork all your dependencies. People can and do delete their repos out of pique, see left-pad.
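As a rough illustration of point 5, the mirroring job can be as small as a script that keeps bare copies of every upstream repo on infrastructure you control. A minimal sketch in Go (the repo list here is just a placeholder; in practice you'd derive it from your dependency manifest):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// Placeholder list; in practice, read this from your dependency manifest.
var upstreams = []string{
	"https://github.com/google/uuid",
	"https://github.com/stretchr/testify",
}

func main() {
	for _, url := range upstreams {
		dst := filepath.Join("mirrors", filepath.Base(url)+".git")
		var cmd *exec.Cmd
		if _, err := os.Stat(dst); os.IsNotExist(err) {
			// First run: take a full mirror clone (all refs and branches).
			cmd = exec.Command("git", "clone", "--mirror", url, dst)
		} else {
			// Later runs: refresh the existing mirror.
			cmd = exec.Command("git", "--git-dir", dst, "remote", "update")
		}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "mirroring %s failed: %v\n", url, err)
			os.Exit(1)
		}
	}
}
```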
Both 4) and 5) are very important, but often forgotten.
Even for my own personal (small) projects, I've gotten hit with problems when I take an extended leave of absence and then return to a project, only to find my dependencies have become either completely outdated and unusable, or the repo was entirely deleted (with only a remote copy to build with).
I've since adopted the "fork" method: the dependency's source is forked (privately), and I get the dependency built; this is done recursively with its dependencies (stopping at the language-level libraries), just to get familiar with their build chain. Only then will I feel good enough to add the dependency to my project (and use their publicly released artefacts from their publicly hosted library repository). It's a bit of work, but I think this effort is spent upfront, and it removes future effort if the project lives long enough to see the ecosystem move/change directions (and you don't want to follow).
Sometimes I do find source libraries (rather than pre-built binaries) to be better in this sense. I merely need to fork and link the sources into the project, rather than have to learn anything about the build chains of dependencies...
This is amazingly insightful, thank you. I'm going to copy this to my own procedures files, on every dependency I use in the future.
The only problem I really see is very tall dependency trees, seen in e.g. the JavaScript world.
Being in the energy sector, dependencies are something we intentionally avoid, because we'd actually have to go through and review changes. What has helped this immensely is AI code assistance. One of the primary uses is writing CLI tools for code and config generation in the tool chain. All of it is about making your life easier without pulling in external dependencies.
An example is that we create OpenAPI docs with LLMs. Then we serve them with Go's net/http + FileServer, meaning that we never leave the standard library. Of course the LLM itself is a third-party dependency, but when you use it to write CLI tools that then do the code generation, it never actually sees the code. That also means that the quality of those CLI tools is less important, because it is their output that matters.
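For reference, the standard-library serving half of that setup really is only a few lines; a minimal sketch, assuming the generated docs land in an ./openapi directory:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the generated OpenAPI docs from ./openapi with the standard library only.
	fs := http.FileServer(http.Dir("./openapi"))
	http.Handle("/docs/", http.StripPrefix("/docs/", fs))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```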
It only goes so long of course. I'm not sure any of our front-end engineers would want to live in a world where they weren't allowed to use React, but then, their products are treated as though they are external anyway.
Anyway, it's a lot easier to make engineers stop using "quality of life" dependencies when you give them something that still makes their lives easy.
Doesn't the LLM spit out the code of the dependency it has been trained on? How is that any different from just forking the dependency and treating it like your own?
One advantage might be that you would only implement the (small) part of the library that you actually need, potentially avoiding bugs such as Log4Shell.
The risk of code duplication is a serious one though, and it would be nice if AI could automatically detect and clean this.
I guess we are in an annoying "in-between" state of things, with a pretty unsure future.
The benefit of using a library directly is your 3rd party library checks will warn you when a CVE is found in the version you are using. If an LLM creates the same functionality from copying a version of a library, you might be getting a version that already has known vulnerabilities, and you probably won't be pulling in any fixes/improvements in future until you find them.
Why not both?
Fork the dependency and use that, to have a stable non-changing base which you use. And additionally, make the original project a dependency but don't actually use it. This way you'll get CVE information from your tooling.
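One way to get that effect in Go specifically (a sketch only; the upstream module path is a placeholder) is the blank-import trick, which keeps the upstream module in go.mod so manifest-based scanners such as Dependabot or osv-scanner keep matching published CVEs against it even though the code you actually call lives in your fork:

```go
//go:build tools

package tools

// Blank-import the upstream module so it stays in go.mod, even though the
// code actually in use is the private fork. Manifest-based scanners then
// keep reporting CVEs against the version we forked from.
// "example.com/upstream/somelib" is a placeholder module path.
import _ "example.com/upstream/somelib"
```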
You would (probably) be avoiding commonly known exploits while introducing subtle AI-induced bugs like printing logs out of order or not locking/ordering resources properly.
Oh yes, I fully agree. It will be quite horrible if we don't handle things properly.
People love to claim this, especially on this site, but in my experience it's the opposite. Many people like writing new code and will do it even when it's detrimental to the business, but 9 times out of 10 even using a "bad" dependency is far more effective than writing in-house.
Dependencies are a double-edged sword.
Most vocal people work on the "disposable" end of software. It's cheaper for software giants to just throw engineer-hours at rewriting a piece of code that has fallen into organizational obscurity than to maintain (hehe) maintainability. And it usually makes no sense for small branding/webshop factories to churn out high-quality, maintainable code.
However, I suggest you revisit the reason why the dreaded "enterprise patterns" exist in the first place. The main reason to use these architectures is so that five years down the line, when documentation is badly outdated, there is no organizational memory left behind that component, original developers have transitioned to either different teams/departments or left the company altogether, the component is still well isolated, analyzable and possible to work on.
Introducing an external dependency carries two inherent business risks: either support for the dependency will be dropped, meaning you will have to absorb the maintenance burden yourself or switch dependencies, or it will introduce breaking changes, meaning you have to stick with an unmaintained version or update your product code. Both situations will eventually impact your feature flow, whatever it is.
The compromise between trunk and leaf (push of dependencies vs. pull of dependencies) is intrinsic to modularization and is always there; however, with internal components this compromise is internal rather than external.
> Many people like writing new code and will do it even when it's detrimental to the business, but 9 times out of 10 even using a "bad" dependency is far more effective than writing in-house.
If you are a SaaS company - most probably yes, as it is the short-term outcome that determines business success. However, if you work in any industry with safety and support requirements on software, or put the burden of long-term support on yourself, the long-term horizon is more indicative of business success.
Remember, writing new code is almost never the bottleneck in any mature-ish organization.
> five years down the line, when documentation is badly outdated, there is no organizational memory left behind that component, original developers have transitioned to either different teams/departments or left the company altogether, the component is still well isolated, analyzable and possible to work on.
This will be far more true for an external dependency - even one that's no longer actively developed - than for an internally developed component, IME. Just at the most basic level an external dependency has to have some level of documentation and at least be usable by someone other than the original author to even get picked up.
> Introducing an external dependency carries two inherent business risks: either support for the dependency will be dropped, meaning you will have to absorb the maintenance burden yourself or switch dependencies, or it will introduce breaking changes, meaning you have to stick with an unmaintained version or update your product code. Both situations will eventually impact your feature flow, whatever it is.
Sure, you need to stay up to date, potentially even take over maintenance yourself, or accept the risk of not doing so, and none of that is free. But writing an internal implementation basically puts you in the worst-case scenario by default - you have to maintain the code yourself, and it's probably less maintainable than an external codebase.
> But writing an internal implementation basically puts you in the worst-case scenario by default - you have to maintain the code yourself, and it's probably less maintainable than an external codebase
This very much depends on the type of dependency we are talking about. If it's externally facing, sure, you'll have to maintain a suitable security posture and keep up with spec/requirement changes coming in from the outside world.
If it's an internal-facing dependency, it's reasonably likely that you never have to touch it again once the problem is solved. When I worked at Amazon we had internal libraries that hadn't really changed in the past decade, because they were designed to operate inside controlled environments that insulated them from the changing world outside.
Would you say Log4j is an internal or an external dependency?
External, unfortunately. A library that only wrote log files would be internal, but log4j is one of those open-source solutions that has fallen prey to the kitchen-sink fallacy - bundling network transport and service discovery into your logging library creates a massive attack surface that isn't strictly related to the library's stated function.
Can I ask how seriously your company takes security vulnerabilities and licensing? I used to have a pretty lax attitude toward dependencies, but that changed significantly when I joined a company that takes those things very seriously.
I've worked with many companies over the years. License management should be automatic or mostly-automatic if you're taking dependency management seriously (particularly these days where so many projects use well-known open-source licenses and everything is machine-readable metadata), and I've never seen a single in-house codebase (even at e.g. F500 financial institutions) that took security more seriously than your average open-source library.
They could've at least added left-pad as an example of a bad dependency, instead of the cop-out
Completely agree. It's one of the most important skills to know which dependency is good and which is bad.
My two cents. If a dependency is paid, then it is usually bad. Because the company providing that dependency has an incentive to lock you in.
As another point, "dependency minimalism" is a nice name for it. https://x.com/VitalikButerin/status/1880324753170256005
Developers would say this and then deploy to AWS Lambda or Vercel with a straight face
As well, paid dependencies usually only have one source of support, and when the company goes under or stops supporting the product you are in rough seas.
Given very few companies last forever, you have to assess if the trajectory of your project would be impacted by being coupled to their ability to support you.
> when the company goes under or stops supporting the product
Or, even worse, gets acquired by someone like Salesforce
Exactly. That's another point
> My two cents. If a dependency is paid, then it is usually bad. Because the company providing that dependency has an incentive to lock you in.
Vendor lock-in is a risk for both purchased components and FOSS ones where the organization is unwilling to assume maintenance. The onus is on the team incorporating third-party component(s) to manage their risk, identify alternatives as appropriate, and modularize their solutions.
If a dependency is paid and it is bad, then maybe you just aren't paying enough for it.
If my code has a dependency, then I want there to be people who feel it is their job to support it.
Either there have to be enough people who are paid to support it, or there have to be enough people whose self-worth and identity is so wrapped up in the code that they take it as a point of honor to support it.
I don't need a company that's likely to abandon a code product and leave me hanging. I also don't need a free software author who says "bugs are not my problem - you have the source, go fix it yourself." If those are my choices, I'd rather write it myself.
I've experienced some bad paid dependencies forced on us by a non-engineering team. I've had a few good experiences with "open core" kinds of dependencies that are widely used by the community, e.g. sidekiq, and therefore less likely to suddenly vanish one day as they would almost certainly get forked and maintained by others in the same boat.
The upside of paying for something is that, assuming the owner or company is stable, I don't have to worry about some unpaid solo maintainer burning out and never logging back in.
> assuming the owner or company is stable
and continues to be stable for the lifetime of your product.
https://opensourcemaintenancefee.org/ uses payments as an incentive to keep projects going, so dependencies can be updated. .NET Rocks! interviewed them https://www.dotnetrocks.com/details/1948
The author is from New Zealand. You have to understand the Number 8 wire[1] mindset that has taken root there to put this in context.
[1]https://en.wikipedia.org/wiki/Number_8_wire#:~:text=Accordin...
I think just about every experienced developer I know - most of whom are not from NZ - would agree with this article. We've all been burned by using the wrong dependency for the job at some point, or by the pain of having to deal with breaking changes when upgrading a library that really wasn't necessary to use in the first place.
To push back on this article's thrust, those are actual issues that can occur with dependencies. It's just that they usually don't, and most dependencies are harmless, small, and transparent, so articles like this and people complaining about dependencies are almost always overstating their case.
And therefore, adopting a "zero dependencies" policy is absolutely an overreaction.
I'd argue harmless, small, and transparent dependencies are the easiest to avoid. The extreme of it is "is_even", but any library that could fit in 200 lines should probably be owned code.
Where the article hits is critical libraries that you heavily rely on and that your code is slowly formatted around. I'm thinking specific data parsers, fancy query brokers, time management libraries etc.
I don't think the author is actually arguing for zero dependencies here. While they do quote a "zero dependency policy" in one open source project as being "instructive", they also propose a "framework for evaluating dependencies" - which of course anyone advocating for zero dependencies would have no need for.
As a developer from NZ I agree
Purely vibes but as a Kiwi I feel like Number 8 Wire mentality has been dead for at least 20 years now.
TIL. I'm reminded of the Australian "She'll buff out, mate" meme.
Either way, you wrap each thing that acts as a dependency, even if it's internal. I treat dependencies that another team in my company delivers the same as any other third-party dependency: never use the classes directly, but always wrap them in a service or component.
When my 'task' is to publish things on a Kafka bus I create a publishing service that takes in an object that I control, only inside that service is there actual talk about the dependency and preferably even that's a bit wrapped. It's easy to go too far with this but a little bit of wrapping keeps you safe.
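A minimal Go sketch of that kind of wrapping, with all names hypothetical: the rest of the codebase only sees Publisher and OrderPlaced, and only this package knows which Kafka client sits behind the kafkaSender interface.

```go
package events

import (
	"context"
	"encoding/json"
)

// OrderPlaced is a shape we own; callers never see Kafka types.
type OrderPlaced struct {
	OrderID string `json:"order_id"`
	Amount  int64  `json:"amount"`
}

// Publisher is all the rest of the codebase depends on.
type Publisher interface {
	PublishOrderPlaced(ctx context.Context, e OrderPlaced) error
}

// kafkaSender stands in for whichever concrete Kafka client is in use;
// only this package knows about it.
type kafkaSender interface {
	Send(ctx context.Context, topic string, key, value []byte) error
}

type kafkaPublisher struct {
	sender kafkaSender
	topic  string
}

func NewKafkaPublisher(s kafkaSender, topic string) Publisher {
	return &kafkaPublisher{sender: s, topic: topic}
}

func (p *kafkaPublisher) PublishOrderPlaced(ctx context.Context, e OrderPlaced) error {
	payload, err := json.Marshal(e)
	if err != nil {
		return err
	}
	return p.sender.Send(ctx, p.topic, []byte(e.OrderID), payload)
}
```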
You can tell it is written by C programmer because they think installing dependencies is hard.
If it's not hard, why do we have Docker?
If dependency == library, it is hard in C or Python, but it isn’t in Rust or Java.
You could easily run a Rust program on bare Linux, but using Docker might simplify the version management story.
But even if you use a language with a sane dependency management story, you probably depend on PostgreSQL, and running it in Docker can be easier than installing it on the system (since distros really like to mess with the PostgreSQL configuration).
Can a Rust program link with a shared C library? If yes, then I would argue they have the same problem with dependency versions.
The ubiquity criteria also informs scaling - for example, if a tooling or backend dependency is being production deployed by a company at a scale of 10^2 or 10^3 times your use case, you're much less likely to hit insurmountable performance issues until you get to similar scale.
They're also much more likely to find/fix bugs affecting that scale earlier than you do, and many companies are motivated to upstream those fixes.
Their libraries sometimes don’t even work for low scale though.
The protocol buffer compiler for Swift at one point actually crashed on unexpected fields, defeating the entire point of protos. The issue happens only when it tries to deserialize from JSON, which I guess none of them actually use at large scale.
To clarify, I'm not thinking of code/libraries written by a huge company, more about open source code that has been scaled far beyond your deployment size by someone/anyone else.
Also, if you're using some feature that isn't regularly exercised (like your Swift protobuf example), it probably doesn't have the variety of use to be covered by Hyrum's Law (see https://www.hyrumslaw.com ), which is definitely a different aspect of the Ubiquity criteria.
I don't agree.
Some of the worst bugs I've hit have been in libraries written by very large companies, supposedly "the best and brightest" (Meta, Google, Microsoft, in that order) but it takes forever for them to respond to issues.
Some issues go on for years. I've spent months working in issue trackers discussing PRs and whether or not we can convince some rules-lawyer it doesn't warrant a spec change (HINT: you never convince him), chasing that "it's faster/cheaper/easier to use a 3rd party package" dragon, only to eventually give up, write my own solution, fix the core issue, and do it in less time than I've already wasted. And probably improve overall performance while I'm at it.
I think a lot of it depends on the exact field you're working in. If you're working in anything sniffing close to consulting, work is a constant deluge of cockamamie requests from clients who don't understand they aren't paying you enough to throw together a PhD research thesis in a month with a revolving crew of junior developers you can't grow and keep because the consulting firm won't hire enough people with any amount of experience to give the small handful of senior developers they keep dragging into every God damned meeting in the building so we can have a chance to come up for air every once in a while.
I'm at a point where I have enough confidence in my skills as a software developer that I know pretty much for certain whether I can develop a given solution. There are very few I can't. I'm not going to try to train an AI model on my own. I won't try to make my own browser. A relational database with all the ACID trimmings, no.
But I'll definitely bang out my own agentic system running off of local inference engines. I'll for sure implement an offline HTML rendering engine for the specific reports I'm trying to export to an image. I'll build a fugging graph database from scratch because apparently nobody can make one that I can convince anyone to pay for (budget: $0) that doesn't shit the bed once a week.
Most of the time, the clients say they want innovation, but what they really want is risk reduction. They wouldn't hire a consultant if it wasn't about risk, they'd put together their own team and drive forward. Being broadly capable and well-studied, while I may not be quite as fast at building that graph database example as an expert in Neo4j or whatever, we're also not getting that person and have no idea when they are showing up. If they even exist in the company, they're busy on other projects in a completely different business unit (probably not even doing that, probably stuck in meetings).
But I know I can get it done in a way that fits the schedule. Spending time reading the worst documentation known to mankind (Google's), because some drive-by said they did this once and used a Google product to do it, is probably going to end up wasting a lot of your time, only to realize that said drive-by didn't spend long enough actually listening to the meeting to understand the nuance of the problem. Time that you could have spent building your own and hitting your schedule with certainty.
Sorry, it's late and I'm tired from a full quarter of 12 hour days trying to rescue a project that the previous team did nothing on for the previous quarter because... IDK why. No adults in the room.
Hope you're getting paid well bro
Don't go into consulting. If you do, it's impossible to get out. No product-oriented companies will ever hire you. Wish someone told me that 20 years ago.
> Sometimes it's easier to write it yourself than install the dependency...
A few examples wouldn't hurt...
> Their breaking changes can trigger expensive re-writes of your own code to handle a new interface.
Don't update them, or NIH-update them just like you'd do with the original NIH code. Still a net win in saving the time for the initial coding.
> You need to ensure they end up on your clients' machine.
Vendoring exists?
Database drivers, crypto, etc. I'm always "installing", but for 97% of other stuff I tend to roll my own. And when I don't want to reinvent the wheel, I take time to vet the dependency:
How many LOC?
Does it have deps of its own?
Is it a maintained library?
Can I fork/fix bugs?
> Sometimes it's easier to write it yourself than install the dependency
This is definitely true, but only really relevant if you're either a solo dev or in for the long haul, so to speak. And it'll work way better for your use case too.
The problem is working with others. Others coming in are more apt to try to learn a known or semi-known system than "Dave who used to work here's crazy thoughts." Especially in this market where longevity is rare, and company specific knowledge is useless as resume fodder.
So from a dev standpoint it absolutely makes sense. From a business standpoint, probably markedly less so.
Figuring out how to get others to install a dependency is even worse!
Dozens of comments and no mention of DIP yet.
You should absolutely use dependencies, and you should be able to tear them out as soon as you don't like them anymore.
Including the dependency can even be part of your NIH if you're so inclined. Instantiate two copies of your code (dep/NIH) and test that they behave the same.
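A tiny sketch of that parity idea in Go, with hypothetical names: padLeft is the NIH version, and libPadLeft stands in for a thin wrapper over the dependency (stubbed here so the example stays self-contained).

```go
package parity

import "testing"

// Hypothetical names: padLeft is the NIH implementation, libPadLeft wraps
// the third-party dependency (stubbed here for the sketch).
func padLeft(s string, width int, pad rune) string {
	for len([]rune(s)) < width {
		s = string(pad) + s
	}
	return s
}

func libPadLeft(s string, width int, pad rune) string {
	return padLeft(s, width, pad) // real code would call the dependency here
}

func TestPadLeftParity(t *testing.T) {
	cases := []struct {
		in    string
		width int
	}{
		{"7", 3}, {"abc", 3}, {"", 5}, {"hello", 2},
	}
	for _, c := range cases {
		nih, dep := padLeft(c.in, c.width, '0'), libPadLeft(c.in, c.width, '0')
		if nih != dep {
			t.Errorf("padLeft(%q, %d): NIH=%q dependency=%q", c.in, c.width, nih, dep)
		}
	}
}
```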
"One technique for making software more robust is to minimize what your software depends on – the less that can go wrong, the less that will go wrong. Minimizing dependencies is more nuanced than just not depending on System X or System Y, but also includes minimizing dependencies on features of systems you are using."
From http://nathanmarz.com/blog/principles-of-software-engineerin...
And here I thought I was going to read a novel defense of the British healthcare system.
NHS?
Haha, I was doubly confused! You’re right, I should have said National Institutes of Health.
Everybody loves NIH until what gets invented is dogshit. Seriously, it's easy to romanticise the idea but wait until you have to work on something that evolved over a decade without any refactoring or documentation, and worse, the author is a lifer who couldn't ever imagine that their code could use improvement. I'll migrate to whatever silly API react router has cooked up this year over living that again, thanks.
What's NIH
Short for Not Invented Here syndrome... when developers ignore existing libraries or solutions and implement their own version. Usually this is meant negatively, i.e. developers thinking they -need- to implement their own version, at great expense, when they could have just used an off-the-shelf library. However this article is positing that writing your own code is a good thing in some cases.
Not Invented Here
National Institute of Health
This is what I thought too.
I feel this post. I maintain software in Android, iOS, Python, embedded C, and of late adding Elixir to the mix.
Some in the community will whine about the lack of an Elixir ecosystem. But often, I'm fine just putting together what I need. It varies. I don't want to do my own Bandit server or Phoenix LiveView stuff. But MQTT, no probs.
Often I find that the need for libraries can be an indictment of the complexity of the problem space. Take Bluetooth as an example. What a huge ball. Though at the end of the day, I've done straight-to-HCI Python implementations that for some things are better than the "libraries" available. Don't get me started on the "hold your hand" GPIO libraries for the Raspberry Pi.
One type of dependency that I kind of miss is the "copy this code to your project" dependency. You can take complete ownership right away. Whereas with more crafted dependencies, the dependency surface is more than just an algorithm writ large in code, but a whole philosophy/metaphor that may not be an exact fit for your own project.
A good question I ask myself is: Can I vendor it in? If I cannot, that’s usually something standard, but also a complex domain (crypto, network protocol, platform framework,…). Anything else, I assume it’s only a short term gain, so I make sure to explore the library and get a good understanding of its internals.
I am failing to find it, but Simon Willison had a post where he vendored some small (<1000 line) blob of code instead of adding it as a dependency. The trick was that he had some Github action(?) automation which would automatically check if the vendored version ever fell out of sync with upstream. Get the benefits of vendoring while minimizing your actual support burden.
Only a realistic strategy for small bits of code, but I have considered doing the same for a few utility libraries that are a focused niche, but subtle enough that you would rather outsource the complexity.
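A rough sketch of such a drift check in Go, with placeholder path and URL; whatever automation Simon actually used may look quite different, but the idea is just "compare the vendored copy against upstream and fail loudly if they diverge":

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Placeholders: point these at the file you vendored and its upstream raw URL.
const (
	vendoredPath = "thirdparty/tiny_helper.go"
	upstreamURL  = "https://raw.githubusercontent.com/example/upstream/main/tiny_helper.go"
)

func main() {
	local, err := os.ReadFile(vendoredPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp, err := http.Get(upstreamURL)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	remote, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !bytes.Equal(local, remote) {
		fmt.Println("vendored copy has drifted from upstream; review the diff")
		os.Exit(1)
	}
	fmt.Println("vendored copy matches upstream")
}
```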
Vendoring is somewhat an answer, but not always. I have some Python code; virtualenv is an obvious solution. However, older modules depend on older Python 3.9 behavior, and Python 3.9's EOL is October 2025.
Vendoring is taking the responsibility to maintain it as part of your codebase instead of relying on third party, not just capturing a snapshot.
What's the difference vs. a state where you wrote the code many years ago and it depends on 3.9?
Note that the ECMA-48 escape sequences themselves are the good dependency, not abstractions that hide them, like your tput command or curses or what have you.
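For the unfamiliar, "depending on the escape sequences themselves" can be as small as this (Go used only for concreteness; the SGR codes are the ECMA-48 part):

```go
package main

import "fmt"

// ECMA-48 SGR sequences used directly; no tput/curses layer in between.
const (
	bold  = "\x1b[1m"
	red   = "\x1b[31m"
	reset = "\x1b[0m"
)

func main() {
	fmt.Println(red + "error:" + reset + " something went " + bold + "wrong" + reset)
}
```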
I think the OP's article is saying that they are both good and bad depending on the context. You need to evaluate these things on a case by case basis. Writing a little TUI for yourself or your team? Sure, go nuts with escape codes. Making a well supported tool that runs on any developer's machine? Maybe consider curses, there are probably more edge cases than you think.
Trying to find product market fit as a startup? Who gives a duck, find some customers before you care too much about the tech =P cattle not pets etc
Dan Luu quoting Joel Spolsky:
“Find the dependencies — and eliminate them.” When you're working on a really, really good team with great programmers, everybody else's code, frankly, is bug-infested garbage, and nobody else knows how to ship on time.
[..] We didn't think that everyone else was producing garbage but, we also didn't assume that we couldn't produce something comparable to what we could buy for a tenth of the cost.
After many years of using libraries without any sort of vetting process, I completely agree in principle that most code out there is bug-infested garbage. One dependency had a concurrency bug so severe it almost cost our company our biggest customer early in our journey. We forked and completely rewrote the library, as after looking at the source it was clear they didn't take nearly as much care with their code as we did. This was the worst case, but we have faced many bugs in widely used libraries. We can't replace them all as time is short, but if we could, we would probably replace most of them.
With AI, NIH will be the new normal.
I wish. As far as I can tell the Venn diagram of people building piles of shit with NPM and people building piles of shit with LLMs seems pretty close to a circle.
Why not ask AI how to use <appropriate library / framework> to implement <thing>?
Doesn’t seem like you can blame NIH on AI more than other motivations for NIH.
Edit to add: If AI makes NIH easier, then it implies that AI is good at solving problems, and speaks to AI’s credit.
From my use of Claude Code: without a proper system prompt, Claude generates code rather than using a library - this week, for example, command-line parameter and flag parsing. The difficulty is where the tipping point for using a library is; it can't be https://www.npmjs.com/package/is-even