Whoa, I didn't know about this:
# Run with restricted file system access
node --experimental-permission \
--allow-fs-read=./data --allow-fs-write=./logs app.js
# Network restrictions
node --experimental-permission \
--allow-net=api.example.com app.js
Looks like they were inspired by Deno. That's an excellent feature. https://docs.deno.com/runtime/fundamentals/security/#permiss...
I very much dislike such features in a runtime or app.
The "proper" place to solve this, is in the OS. Where it has been solved, including all the inevitable corner cases, already.
Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem dies it solve that hasn't been solved by other people?
How would you do this in a native fashion? I mean I believe you (chroot jail I think it was?), but not everyone runs on *nix systems, and perhaps more importantly, not all Node developers know or want to know much about the underlying operating system. Which is to their detriment, of course, but a lot of people are "stuck" in their ecosystem. This is arguably even worse in the Java ecosystem, but it's considered a selling point (write once run anywhere on the JVM, etc).
> but not everyone runs on *nix systems
Meaning Windows? It also has file system permissions on an OS level that are well-tested and reliable.
> not all Node developers know or want to know much about the underlying operating system
Thing is, they are likely not to feel up for understanding this feature either, nor to write their code to play well with it.
And if they at some point do want to take system permissions seriously, they'll find it infinitely easier to work with the OS.
> How would you do this in a native fashion?
I dunno how GP would do it, but I run a service (a web app written in Go) under a specific user and lock down what that user can read and write on the FS.
For networking, though, that's a different issue.
How many apps do you think have properly set users and access rights limited to only what they need? In production? And even if that percentage were high, how about developers' machines, where people run node scripts that might import who knows what? It is possible to run things safely, but I doubt a high percentage of people do. A feature like this can increase that percentage.
> What problem does it solve that hasn't been solved by other people?
nothing. Except for "portability" arguments perhaps.
Java has had security managers and access restrictions built in, but it never worked very well (and is quite cumbersome to use in practice). And there have been lots of bypasses over the years, patchwork fixes, etc.
Tbh, the OS is the only real security you can trust, as it's as low a level as any application would typically go (unless you end up in driver/kernel space, like those anti-virus/anti-cheat/CrowdStrike apps).
But platform vendors always want to NIH and make their platform slightly easier and still present the similar level of security.
I wouldn't trust it to be done right. It's like a bank trusting that all their customers will do the right thing. If you want MAC (as opposed to DAC), do it in the kernel like it's supposed to be; use apparmor or selinux. And both of those methods will allow you to control way more than just which files you can read / write.
Yeah, but you see, this requires being deployed alongside the application somehow, with the help of the ops team, while changing the command line is under the control of the application developer.
> I wouldn't trust it to be done right.
I don't understand this sort of complaint. Would you prefer that they never worked on this support at all? Exactly what's your point? Airing trust issues?
Node allows native addons in packages via the N-API, so native modules aren't restricted by those permissions. Deno deals with this via --allow-ffi, but these experimental Node permissions have nothing to disable the N-API; they just restrict the Node standard library.
> Node allows native addons in packages via the N-API so any native module aren't restricted by those permissions. (...) Node permissions (...) just restrict the Node standard library.
So what? That's clearly laid out in Node's documentation.
https://nodejs.org/api/permissions.html#file-system-permissi...
What point do you think you're making?
What is the point of a permissions system that can be trivially bypassed?
Can't seem to find an official docs link for allow-net, only blog posts.
https://github.com/nodejs/node/pull/58517 - I think the `semver-major` and the timing mean that it might not be available until v25, around October
Path restrictions look simple, but they're very difficult to implement correctly.
PHP used to have (actually, still has) an "open_basedir" setting to restrict where a script could read or write, but people found out a number of ways to bypass that using symlinks and other shenanigans. It took a while for the devs to fix the known loopholes. Looks like node has been going through a similar process in the last couple of years.
Similarly, I won't be surprised if someone can use DNS tricks to bypass --allow-net restrictions in some way. Probably not worth a vulnerability in its own right, but it could be used as one of the steps in a targeted attack. So don't trust it too much, and always practice defense in depth!
Last time a major runtime tried implementing such restrictions on VM level, it was .NET - and it took that idea from Java, which did it only 5 years earlier.
In both Java and .NET VMs today, this entire facility is deprecated because they couldn't make it secure enough.
The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.
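For anyone who hasn't switched yet, a minimal sketch of the built-ins (the URL and the 5s timeout are just made-up examples):

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000); // abort the request after 5s
try {
  const res = await fetch('https://api.example.com/items', { signal: controller.signal });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  console.log(await res.json());
} finally {
  clearTimeout(timer);
}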
16 years after launch, the JS runtime centered around network requests now supports network requests out of the box.
Obviously it supported network requests, the fetch api didn't even exist back then, and XMLHttpRequest which was the standard at the time is insane.
Insane but worked well. At least we could get download progress.
You can get download progress with fetch. You can't get upload progress.
Edit: Actually, you can even get upload progress, but the implementation seems fraught due to scant documentation. You may be better off using XMLHttpRequest for that. I'm going to try a simple implementation now. This has piqued my curiosity.
It took me a couple hours, but I got it working for both uploads and downloads with a nice progress bar. My uploadFile method is about 40 lines of formatted code, and my downloadFile method is about 28 lines. It's pretty simple once you figure it out!
Note that a key detail is that your server (and any intermediate servers, such as a reverse-proxy) must support HTTP/2 or QUIC. I spent much more time on that than the frontend code. In 2025, this isn't a problem for any modern client and hasn't been for a few years. However, that may not be true for your backend depending on how mature your codebase is. For example, Express doesn't support http/2 without an external dependency. After fussing with it for a bit I threw it out and just used Fastify instead. So I understand any apprehension/reservations there.
Overall, I'm pretty satisfied knowing that fetch has wide support for easy progress tracking.
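The download side boils down to roughly this (a sketch; it assumes the server sends Content-Length, and updateProgressBar stands in for whatever your UI does):

const res = await fetch(url);
const total = Number(res.headers.get('content-length')) || 0;
const reader = res.body.getReader();
let received = 0;
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  received += value.length;
  if (total) updateProgressBar(received / total); // hypothetical UI helper
}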
Sniped
Nerd
I never really liked the syntax of fetch: the need to await response.json(), the additional error handling you have to implement, etc. -
async function fetchDataWithAxios() {
  try {
    const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
    console.log('Axios Data:', response.data);
  } catch (error) {
    console.error('Axios Error:', error);
  }
}

async function fetchDataWithFetch() {
  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');
    if (!response.ok) { // Check if the HTTP status is in the 200-299 range
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    const data = await response.json(); // Parse the JSON response
    console.log('Fetch Data:', data);
  } catch (error) {
    console.error('Fetch Error:', error);
  }
}
While true, in practice you'd only write this code once as a utility function; compare two extra bits of code in your own utility function vs loading 36 kB worth of JS.
Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.
Agreed, but I think that in every project I've done I've put at least a minimal wrapper function around axios or fetch - so adding a teeny bit more to make fetch nicer feels like tomayto-tomahto to me.
You’re shooting yourself in the foot if you put naked fetch calls all over the place in your own client SDK though. Or at least going to extra trouble for no benefit
I somehow don't get your point.
The following seems cleaner than either of your examples. But I'm sure I've missed the point.
fetch(url).then(r=>r.ok ? r.json() : Promise.reject(r.status))
.then(
j=>console.log('Fetch Data:', j),
e=>console.log('Fetch Error:', e)
);
I share this at the risk of embarrassing myself, in the hope of being educated.
Depends on your definition of clean; I consider this to be "clever" code, which is harder to read at a glance.
You'd probably put the code that runs the request in a utility function, so the call site would be `await myFetchFunction(params)`, as simple as it gets. Since it's hidden, there's no need for the implementation of myFetchFunction to be super clever or compact; prefer readability and don't be afraid of code length.
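Something like this, for instance (just a sketch; myFetchFunction is an illustrative name):

async function myFetchFunction(url, options = {}) {
  const response = await fetch(url, options);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}

// call sites stay trivial:
const data = await myFetchFunction('https://jsonplaceholder.typicode.com/posts/1');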
I usually write it like:
const data = (await fetch(url)).then(r => r.json())
But it's very easy obviously to wrap the syntax into whatever ergonomics you like.
Honestly it feels like yak shaving at this point; few people would write low-level code like this very often. If you connect with one API, chances are all responses are JSON so you'd have a utility function for all requests to that API.
Code doesn't need to be concise, it needs to be clear. Especially back-end code where code size isn't as important as on the web. It's still somewhat important if you run things on a serverless platform, but it's more important then to manage your dependencies than your own LOC count.
why not?
const data = await (await fetch(url)).json()
That's very concise. Still, the double await remains weird. Why is that necessary?
You don't need all those parens:
await fetch(url).then(r => r.json())
There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.
[1] convenient capability - otherwise you'd use XMLHttpRequest
1. This is not 100ms latency for requests. It's 100ms latency for the init of a process that loads this code. And this was specifically in the context of a Lambda function that may only have 128MB RAM and like 0.25vCPU. A hello world app written in Java that has zero imports and just prints to stdout would have higher init latency than this.
2. You don't need to use axios. The main value was that it provides a unified API that could be used across runtimes and has many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.
Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of all the compile + runtime zod/json schema-based validation sets of libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type safety correctness to compile time.
Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715
I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.
I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed apache.poi's excel handler to stream, which poi only supports in one direction. Someone had written a poi-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of mvn dependency hell.
Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.
ts-rest doesn't see a lot of support these days. Its lack of adoption of modern TanStack Query integration patterns finally drove us to look for alternatives.
Luckily, oRPC had progressed enough to be viable now. I cannot recommend it over ts-rest enough. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.
nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!
I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.
Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?
I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.
For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.
For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
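Roughly like this (a sketch, assuming zod; the names are illustrative):

import { z } from 'zod';

async function fetchApi(schema, url, init) {
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return schema.parse(await res.json()); // throws if the response doesn't match the schema
}

const User = z.object({ id: z.number(), name: z.string() });
const user = await fetchApi(User, '/api/users/1');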
How do you supply the schema on the other side?
I found that keeping the frontend & backend in sync was a challenge so I wrote a script that reads the schemas from the backend and generated an API file in the frontend.
There are a few ways, but I believe SSOT (single source of truth) is key, as others basically said. Some ways:
1. Shared TypeScript types
2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety
3. RTK (redux toolkit) query style: codegen'd frontend client
I personally prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ the downside of more code, plus as the codebase gets larger you start to need a cache so you don't regenerate the entire API on every little change.
Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.
What is a validation quirk that would happen when using server side Zod schemas that somehow doesn’t happen with a codegened client?
I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).
For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.
It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.
Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.
The server validates request bodies and produces responses that match the type signature of the response schema.
The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.
It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
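In practice the shared piece can be as small as this (illustrative sketch, assuming zod):

// shared/contract.ts - imported by both server and client
import { z } from 'zod';
export const RenameUserRequest = z.object({ userId: z.string(), newName: z.string() });
export const RenameUserResponse = z.object({ ok: z.boolean() });

// server: RenameUserRequest.parse(req.body) before handling
// client: RenameUserResponse.parse(await res.json()) to enforce the contract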
This will break old clients. Having a deployment strategy taking that into account is important.
Effect provides a pretty good engine for compile-time schema validation that can be composed with various fetching and processing pipelines, with sensible error handling for cases when external data fails to comply with the schema or when network request fails.
The schema definition is more efficient than writing input validation from scratch anyway so it’s completely win/win unless you want to throw caution to the wind and not do any validation
Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.
For what it's worth, happy user of ts-rest here. Best solution I landed upon so far.
As a library author it's the opposite, while fetch() is amazing, ESM has been a painful but definitely worth upgrade. It has all the things the author describes.
Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.
I'm a small-ish time author, but it was really painful for a while since we were all dual-publishing in CJS and ESM, which was a mess. At some point some prominent authors decided to go full-ESM, and basically many of us followed suit.
The fetch() change has been big only for the libraries that did need HTTP requests, otherwise it hasn't been such a huge change. Even in those it's been mostly removing some dependencies, which in a couple of cases resulted in me reducing the library size by 90%, but this is still Node.js where that isn't such a huge deal as it'd have been on the frontend.
Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
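The escape hatch I end up reaching for most is the built-in converters, roughly (a sketch; both were still marked experimental last I checked):

import { Readable } from 'node:stream';
import { createReadStream } from 'node:fs';

const webStream = Readable.toWeb(createReadStream('./file.txt')); // Node stream -> WHATWG stream
const nodeStream = Readable.fromWeb(webStream);                   // and back again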
What a dual-publishing nightmare. Someone had to break the stalemate first. 90% size reduction is solid even if Node bundle size isn't as critical. The streams thing sounds messy, though. Two incompatible streaming standards in the same runtime is bound to create headaches.
The fact that CJS/ESM compatibility issues are going away indicates it was always a design choice and never a technical limitation (most CJS format code can consume ESM and vice versa). So much lost time to this problem.
It was neither a design choice nor a technical limitation. It was a big complicated thing which necessarily involved fiddly internal work and coordination between relatively isolated groups. It got done when someone (Joyee Cheung) actually made the fairly heroic effort to push through all of that.
Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...
You're right. It wasn't a design choice or technical limitation, but a troubling third thing: certain contributors consistently spreading misinformation about ESM being inherently async (when it's only conditionally async), and creating a hostile environment that “drove contributors away” from ESM work - as the implementer themselves described.
Today, no one will defend ERR_REQUIRE_ESM as good design, but it persisted for 5 years despite working solutions since 2019. The systematic misinformation in docs and discussions combined with the chilling of conversations suggests coordinated resistance (“offline conversations”). I suspect the real reason for why “things do and don’t happen” is competition from Bun/Deno.
I maintain a library also, and the shift to ESM was incredibly painful, because you still have to ship CJS, only now you have to work out how to write the code in a way that can be bundled either way, can be tested, etc etc.
It was a pain, but Rollup can export both if you write the source in ESM. The part I find most annoying is exporting the TypeScript types. There's no tree-shaking for that!
For simple projects you now needed to add Rollup or another build system you didn't have or need before. For complex systems (with non-trivial exports), you now have a mess since it wouldn't work straight away.
Now with ESM if you write plain JS it works again. If you use Bun, it also works with TS straight away.
node fetch is WAY better than axios (easier to use/understand, simpler); didn't really know people were still using axios
You still see axios used in amateur tutorials and stuff on dev.to and similar sites. There’s also a lot of legacy out there.
AI is going to bring that back like an 80s disco playing Wham. If you're gonna do it, do it wrong...
I've had Claude decide to replace my existing fetch-based API calls with Axios (not installed or present at all in the project), apropos of nothing during an unrelated change.
I had Gemini correct my code using Google's new LLM API to use the old one.
hahaha, I see it all the time in my responses. I immediately reject.
I do miss the axios extensions tho, it was very easy to add rate-limits, throttling, retry strategies, cache, logging ..
You can obviously do that with fetch but it is more fragmented and more boilerplate
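For example, a hand-rolled retry is short, but it's one more thing you now own (sketch):

async function fetchWithRetry(url, init = {}, retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url, init);
      // retry server errors, hand everything else back to the caller
      if (res.status >= 500 && attempt < retries) throw new Error(`HTTP ${res.status}`);
      return res;
    } catch (err) {
      if (attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 250)); // exponential backoff
    }
  }
}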
Totally get that! I think it depends on your context. For Lambda where every KB and millisecond counts, native fetch wins, but for a full app where you need robust HTTP handling, the axios plugin ecosystem was honestly pretty nice. The fragmentation with fetch libraries is real. You end up evaluating 5 different retry packages instead of just grabbing axios-retry.
Sounds like there's space for an axios-like library built on top of fetch.
Like axios can do it if you specify the fetch backend, it just won't do the .json() asynchronously.
I think that's the sweet spot. Native fetch performance with axios-style conveniences. Some libraries are moving in that direction, but nothing's really nailed it yet. The challenge is probably keeping it lightweight while still solving the evaluating 5 retry packages problem.
Is this what you're looking for? https://www.npmjs.com/package/ky
I haven't used it but the weekly download count seems robust.
Ky is definitely one of the libraries moving in that direction. Good adoption based on those download numbers, but I think the ecosystem is still a bit fragmented. You've got ky, ofetch, wretch, etc. all solving similar problems. But yeah, ky is probably the strongest contender right now, in my opinion.
Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.
This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.
isn't upgrading node going to be a bigger challenge? (if you're on a node version that's no longer receiving maintenance)
axios got discontinued years ago I thought, nobody should still be using it!
No? Its last update was 12 days ago
It kills me that I keep seeing axios being used instead of fetch, it is like people don't care, copy-paste existing projects as starting point and that is it.
Maybe I'm wrong and it's been updated, but doesn't axios support progress indicators out of the box, and isn't it just generally cleaner?
That said, there are npm packages that are ridiculously obsolete and overused.
It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.
Also, "fetch" is lousy naming considering most API calls are POST.
“Most” is doing a lot of heavy lifting here. I use plenty of APIs that are GET
That's a category error. "Fetch" just refers to making a request; POST is the method, or HTTP verb, used when making the request. If you're really keen, you could roll your own:
const post = (url) => fetch(url, {method:"POST"})
I read this as OP commenting on the double meaning of the category. In English, “fetch” is a synonym of “GET”, so it’s silly that “fetch” as a category is independent of the HTTP method
That makes sense.
This has been the case for quite a while; most of the things in this article aren't brand new.
Undici in particular is very exciting as a built-in request library, https://undici.nodejs.org
Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.
It's in core but not exposed to users directly. You still need to install the npm module if you want to use it, which is required if you need, for example, to go through an outgoing proxy in your production environment.
Those... are not mutually exclusive as killer upgrade. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.
Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.
Especially since it seems perfectly possible to support both simultaneously. Bun does it. If there's an edge case, I still haven't hit it.
axios works for both node and browser in production code, not sure if fetch can do as much as axios in browser though
You no longer need to install chalk or picocolors either, you can now style text yourself:
`const { styleText } = require('node:util');`
Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
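For example (quick sketch):

const { styleText } = require('node:util');

console.log(styleText('green', 'ok') + ' server started');
console.log(styleText(['bold', 'red'], 'error') + ' something broke'); // formats can be combined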
I never needed those. I would just have an application wide object property like:
text: {
angry : "\u001b[1m\u001b[31m",
blue : "\u001b[34m",
bold : "\u001b[1m",
boldLine : "\u001b[1m\u001b[4m",
clear : "\u001b[24m\u001b[22m",
cyan : "\u001b[36m",
green : "\u001b[32m",
noColor : "\u001b[39m",
none : "\u001b[0m",
purple : "\u001b[35m",
red : "\u001b[31m",
underline: "\u001b[4m",
yellow : "\u001b[33m"
}
And then you can call that directly like: `${vars.text.green}whatever${vars.text.none}`;
This is the problem with people trying to be clever. Now you output escape sequences regardless of terminal setting.
Using a library which handles that (and a thousand other quirks) makes much more sense.
It depends on the audience / environment where your app is used. Public, a library is better. Internal / a defined company environment, you don't need extra dependencies (but only when it comes to such simple solutions that could easily be replaced with a lib).
I think the widely-implemented terminal escape sequences are well-known at this point, but I don't see why I'd want to copy this into every project.
Also, I'm guessing if I pipe your logs to a file you'll still write escapes into it? Why not just make life easier?
Arguably, using a library is also "copy it into every project".
that's needlessly pedantic. the GP is noting that it's built into node's standard library, which might discourage you from installing a library or copying a table of ansi escapes.
I have a "ascii.txt" file ready to copy/paste the "book emoji" block chars to prepend my logs. It makes logs less noisy. HN can't display them, so I'll have to link to page w/ them: https://www.piliapp.com/emojis/books/
Nice browser add-on https://johannhof.github.io/emoji-helper/
Why would you call that "ascii.txt"?
Caz it has more than those book emojis. It makes writing geometric code docstrings easier. Here's the rest of it (HN doesn't format it good, try copy/paste it).
cjk→⋰⋱| | ← cjk space btw | |
thinsp | |
deg°
⋯ …
‾⎻⎼⎽ lines
_ light lines
⏤ wide lines
↕
∧∨
┌────┬────┐
│ │ ⋱ ⎸ ← left bar, right bar: ⎹
└────┴────┘
⊃⊂ ⊐≣⊏
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯====›‥‥‥‥
◁ ◿ ◺ ◻ ◸
Λ
╱│╲
╱ │ ╲
──┼──
╲ │ ╱
╲│╱
V
┌ ─┏━━┳━━━━━━━┓
│ ┃ ┃ ┃
├ ─┣━━╋━━━━━━━┫
│ ┃ ┃ ┃
└ ─┗━━┻━━━━━━━┛
┌ ─ ┬ ─ ┐
├ ─ ┼ ─ ┤
└ ─ ┴ ─ ┘
┌───┬───┐
├───┼───┤
│ │ │
└───┴───┘
.
╱│╲
↘╱ │ ╲ ↙
╱ │ ╲
→‹───┼───›←
╲ │ ╱
↗ ╲ │ ╱ ↖
╲│╱
↓↑
╳
.
╱ ╲
╱ ╲
╱ ⋰ ╲
╱⋰______╲
Am I the only one who thinks CommonJS was super OK and who doesn't like ESM? Or, put differently, I didn't see the necessity of having ESM at all in Node. Let alone in the browser: imagine loading tons of modules over the wire instead of bundling them.
You can still bundle them, maybe you even should. Webpack still does a good job. Can also remove unused parts.
Hi, regarding streams interoperability: I documented how to handle file streams a while ago, after experimenting with Next.js' old system (Node.js based) and new system (web based): https://www.ericburel.tech/blog/nextjs-stream-files#2024-upd.... It sums up as "const stream = fileHandle.readableWebStream()" to produce a web stream using Node.js fs, rather than creating a Node.js stream.
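i.e. roughly this shape (a sketch, assuming a Next.js-style route handler; the path is illustrative):

import { open } from 'node:fs/promises';

export async function GET() {
  const fileHandle = await open('./report.pdf');
  const stream = fileHandle.readableWebStream(); // WHATWG ReadableStream, not a Node stream
  return new Response(stream);
}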
This is great. I learned several things reading this that I can immediately apply to my small personal projects.
1. Node has built in test support now: looks like I can drop jest!
2. Node has built in watch support now: looks like I can drop nodemon!
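If it helps anyone else, the built-in runner looks roughly like this (sketch based on the docs):

// math.test.mjs
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('adds numbers', () => {
  assert.strictEqual(1 + 2, 3);
});

// run once:           node --test
// re-run on changes:  node --test --watch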
I still like jest, if only because I can use `jest-extended`.
If you haven't tried vitest I highly recommend giving it a go. It is compatible with `jest-extended` and most of the jest matcher libraries out there.
Last time I tried it, IDE integration (e.g. Test Explorer in VSCode) was lacking compared to Jest.
I've heard it recommended; other than speed, what does it have to offer? I'm not too worried about shaving off half-a-second off of my personal projects' 5-second test run :P
I don’t think it’s actually faster than Jest in every circumstance. The main selling point, IMO, is that Vitest uses Vite’s configuration and tooling to transform before running tests. This avoids having to do things like mapping module resolution behaviour to match your bundler. Not having to bother with ts-jest or babel-jest is also a plus.
Jest is just not modern, it can't handle modern async/ESM/etc. out of the box. Everything just works in Vitest.
It has native TS and JSX support, excellent spy, module, and DOM mocking, benchmarking, works with vite configs, and parallelises tests to be really fast.
It also can transparently run tests directly in a browser rather than mocking the DOM, which is a very cool feature that I haven't used enough yet.
Eh, the Node test stuff is pretty crappy, and the Node people aren't interested in improving it. Try it for a few weeks before diving headfirst into it, and you'll see what I mean (and then if you go to file about those issues, you'll see the Node team not care).
I just looked at the documentation and it seems there's some pretty robust mocking and even custom test reporters. Definitely sounds like a great addition. As you suggest, I'll temper my enthusiasm until I actually try it out.
still I would rather use that than import mocha, chai, Sinon, istanbul.
In the end it's just tests; the syntax might be more verbose but LLMs write it anyway ;-)
> but Llms write it anyway
The problem isn't in the writing, but the reading!
Nice post! There's a lot of stuff here that I had no idea was in built-in already.
I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.
Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]
[1]: https://nodejs.org/api/single-executable-applications.html
I have a blog post[1] and accompanying repo[2] that shows how to use SEA to build a binary (and compares it to bun and deno) and strip it down to 67mb (for me, depends on the size of your local node binary).
[1]: https://notes.billmill.org/programming/javascript/Making_a_s...
[2]: https://github.com/llimllib/node-esbuild-executable#making-a...
> 67 MB binary
I hope you can appreciate how utterly insane this sounds to anyone outside of the JS world. Good on you for reducing the size, but my god…
Considering that you are bundling an entire runtime not meant to be installed independently on other computers, 67mb isn't that bad.
Go binaries weigh 20mb, for example.
Have you looked at the average size of, say, a .NET bundled binary?
lol, yes absolutely it's bananas. I wouldn't even consider myself in the JS world!
Yeah, many people here are saying this is AI written. Possibly entirely.
It says: "You can now bundle your Node.js application into a single executable file", but doesn't actually provide the command to create the binary. Something like:
npx postject hello NODE_SEA_BLOB sea-prep.blob \
--sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
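(For completeness, per the SEA docs the rough sequence is: write a sea-config.json, generate the blob, copy the node binary, then inject - something like:)

# sea-config.json: { "main": "hello.js", "output": "sea-prep.blob" }
node --experimental-sea-config sea-config.json
cp $(command -v node) hello
npx postject hello NODE_SEA_BLOB sea-prep.blob \
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2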
Matteo Collina says that the node fetch under the hood is the fetch from the undici node client [0]; and that also, because it needs to generate WHATWG web streams, it is inherently slower than the alternative — undici request [1].
[0] - https://www.youtube.com/watch?v=cIyiDDts0lo
[1] - https://blog.platformatic.dev/http-fundamentals-understandin...
If anyone is curious how they are measuring these are the benchmarks: https://github.com/nodejs/undici/blob/main/benchmarks/benchm...
I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.
I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.
I really wish ESM was easier to adopt. But we're halfway through 2025 and there are still compatibility issues with it. And it just gets even worse now that so many packages are going ESM only. You get stuck having to choose what to cut out. I write my code in TS using ESM syntax, but still compile down to CJS as the build target for my sanity.
In many ways, this debacle is reminiscent of the Python 2 to 3 cutover. I wish we had started with bidirectional import interop and dual module publications with graceful transitions instead of this cold turkey "new versions will only publish ESM" approach.
Can you elaborate on the compatibility issues you ran into, with ESM, please? Are they related to specific libs or use-cases?
Don’t forget the native typescript transpiler which reduces the complexity a lot for those using TS
It strips TS, it does not transpile.
Things like TS enums will not work.
In Node 22.7 and above you can enable features like enums and parameter properties with the --experimental-transform-types CLI option (not to be confused with the old --experimental-strip-types option).
Excellent update! Thanks!
Exactly. You don't even need --experimental-strip-types anymore.
It's still not ready for use. I don't care about enums. But you cannot import local files without extensions, and you cannot define class properties in the constructor.
Enums and parameter properties can be enabled with the --experimental-transform-types CLI option.
Not being able to import TypeScript files without including the ts extension is definitely annoying. The rewriteRelativeImportExtensions tsconfig option added in TS 5.7 made it much more bearable though. When you enable that option not only does the TS compiler stop complaining when you specify the '.ts' extension in import statements (just like the allowImportingTsExtensions option has always allowed), but it also rewrites the paths if you compile the files, so that the build artifacts have the correct js extension: https://www.typescriptlang.org/docs/handbook/release-notes/t...
Why not import files with extensions? That's the way JS (and TS) import is actually supposed to work.
Why would you want to do either of those?
Both are very common Typescript patterns.
Maybe common in some legacy code bases. I recommend running with `erasableSyntaxOnly` for new code bases.
Importing without extensions is not a TypeScript thing at all. Node introduced it at the beginning and then stopped when implementing ESM. Being strict is a feature.
What's true is that they "support TS" but require .ts extensions, which was never even allowed until Node added "TS support". That part is insane.
TS only ever accepted .js and officially rejected support for .ts appearing in imports. Then came Node and strong-armed them into it.
Anyone else find they discover these sorts of things by accident? I never know when a feature was added, just have vague ideas of "that's modern". Feels different to when I only did C# and you'd read the new language features and get all excited. In a polyglot world, and with the rate even individual languages evolve, it's hard to keep up! I usually learn through osmosis or a blog post like this (but that is random learning).
I recommend an excellent Node weekly [0] newsletter for all the things Node. It's been a reliable source of updates for over ten years.
Reading release notes would have solved that issue ;)
Which release notes? I'd need to read hundreds!
I'm truly a fan of node (and V8) so once in a while (2-3 months?) I read their release notes and become aware of these things.
Sometimes I also read the proposals, https://github.com/tc39/proposals
I really want the pipeline operator to be included.
I think slowly Node is shaping up to offer strong competition to Bun.js, Deno, etc. such that there is little reason to switch. The mutual competition is good for the continued development of JS runtimes
Slowly, yes, definitely welcome changes. I'm still missing Bun's `$` shell functions though. It's very convenient to use JS as a scripting language and don't really want to run 2 runtimes on my server.
You might find your answer with `zx`: https://google.github.io/zx/
Or YavaScript https://github.com/suchipi/yavascript
Execa package works nicely for that. Zx has a good DX but is YA runtime.
https://github.com/sindresorhus/execa/blob/main/docs/bash.md
Starting a new project, I went with Deno after some research. The NPM ecosystem looked like a mess; and if Node's creator considers Deno the future and says it addresses design mistakes in Node, I saw no reason to doubt him.
Javascript is missing some feature that will take it to the next level, and I'm not sure what it is.
Maybe it needs a compile-time macro system so we have go full Java and have magical dependency injection annotations, Aspect-Oriented-Programming, and JavascriptBeans (you know you want it!).
Or maybe it needs to go the Ruby/Python/SmallTalk direction and add proper metaprogramming, so we can finally have Javascript on Rails, or maybe uh... Djsango?
I've been away from the node ecosystem for quite some time. A lot of really neat stuff in here.
Hard to imagine that this wasn't due to competition in the space. With Deno and Bun trying to eat up some of the Node market in the past several years, it seems like Node development got kicked into high gear.
Something's missing in the "Modern Event Handling with AsyncIterators" section.
The demonstration code emits events, but nothing receives them. Hopefully some copy-paste error, and not more AI generated crap filling up the internet.
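For reference, the receiving side would normally be something like this (a sketch of what I'd expect the article to show):

import { EventEmitter, on } from 'node:events';

const emitter = new EventEmitter();
setImmediate(() => {
  emitter.emit('tick', 1);
  emitter.emit('tick', 2);
});

for await (const [value] of on(emitter, 'tick')) {
  console.log(value);     // 1, then 2
  if (value === 2) break; // on() never ends on its own, so break explicitly
}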
It's definitely ai slop. See also the nonsensical attempt to conditionally load SQLite twice, in the dynamic imports example.
The list of features is nice, I suppose, for those who aren't keeping up with new releases, but IMO, if you're working with node and js professionally, you should know about most, if not all of these features.
Hasn't AsyncIterator been available in Node for several years? I used it extensively—I want to say—around 3 years ago.
It's definitely awesome but doesn't seem newsworthy. The experimental stuff seems more along the lines of newsworthy.
Huh, I write a fair bit of Node and there was a lot new here for me. Like the built in test stuff.
Also hadn't caught up with the `node:` namespace.
I think top-level async is bad because it won't converge with browser JS.
Node test also I don't think is great, because in isomorphic apps you'll have two syntaxes for testing.
I think the permissions are the core thing we should do, even if we run the apps in docker/dev containers.
Aliases are nice (node:fetch), but I guess they will break all isomorphic code.
Top-level async (I assume you mean `await`) is available in browsers.
I'm just happy to see Node.js patterns as a #1 on HN after continually being dismissed from 2012-2018.
Between browsers and Electron, even those of us who hate this ecosystem are forced to deal with it, and if one does, at least one can do it with slightly more comfort using the newer tooling.
I see two classes of emerging features, just like in the browser:
1. new technologies
2. vanity layers for capabilities already present
It’s interesting to watch where people place their priorities given those two segments
> vanity layers for capabilities already present
Such as?
One man's "vanity layers?" is another man's ergonomics.
And in many of the cases talked about here, the "vanity layers" are massive interoperability improvements.
About time! The whole dragging of feet on ESM adoption is insane. The number of npm packages still stuck on CommonJS is quite large. In some ways I'm glad JSR came along.
I blame tooling folks doing too good of a job abstracting the problem away, and no this of course isn't a jab at them.
probably 70 to 80% of JS users have barely any idea of the difference because their tooling just makes it work.
The LLM made this sound so epic: "The node: prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies."
so in other words, it's a convention
Agreed. It's surprising to see this sort of slop on the front page, but perhaps it's still worthwhile as a way to stimulate conversation in the comments here?
I learned quite a few new things from this, I don't really care if OP filtered it through an LLM before publishing it
Same, but I'm struggling with the idea that even if I learn things I didn't know before, at the limit it'd be annoying if we gave writing like this a free pass continuously - I'd argue filtered might not be the right word - I'd be fine with a net reduction. There's something bad about adding fluff (how many game changers were there?)
An alternative framing I've been thinking about is, there's clearly something bad when you leave in the bits that obviously lower signal to noise ratio for all readers.
Then throw in the account being new, and, well, I hope it's not a harbinger.*
* It is and it's too late.
I too find it unreadable, I guess that's the downside of working on this stuff every day, you get to really hate seeing it.
It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.
I have an increasing feeling of doom re: this.
The forest is darkening, and quickly.
Here, I'd hazard that 15% of front page posts in July couldn't pass an "avoids well-known LLM shibboleths" check.
Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.
Last year I thought it'd be beaten back by social convention. (i.e. if you could showed it was LLM output, it'd make people look stupid, so there was a disincentive to do this)
The latest round of releases was smart enough, and has diffused enough, that seemingly we have reached a moment where most people don't know the latest round of "tells" and it passes their Turing test, so there's not enough shame attached to prevent it from becoming a substantial portion of content.
I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.
And the second half of the article was composed of entirely 4 item lists.
There's just so many tells in this one though, and they aren't new ones. Like a dozen+, besides just the entire writing style being one, permeating through every word.
I'm also pretty shocked how HNers don't seem to notice or care, IMO it makes it unreadable.
I'd write an article about this but all it'd do is make people avoid just those tells and I'm not sure if that's an improvement.
i am desperately awaiting the butlerian jihad ;_;
Also no longer having to use an IIFE for top-level await is allegedly a „game changer.“
Be honest. How much of this article did you write, and how much did ChatGPT write?
To the latter: Absolutely all of it, though I put my money on Claude, it has more of its prominent patterns.
What, surely you’re not implying that bangers like the following are GPT artifacts!? “The changes aren’t just cosmetic; they represent a fundamental shift in how we approach server-side JavaScript development.”
And now we need to throw the entire article out because we have no idea whether any of these features are just hallucinations.
You don't know if the author knows what they're talking about with or without AI in the picture.
I think we have enough node developers here to know its truthfulness.
I'm not sure - a lot of the top comments are saying that this article is great and they learned a lot of new things. Which is great, as long as the things they learned are true things.
Good to see Node is catching up although Bun seems to have more developer effort behind it so I'll typically default to Bun unless I need it to run in an environment where node is better for compatibility.
Does the built-in test executor collect and report test coverage?
One thing you should add to section 10 is encouraging people to pass `cause` option while throwing new Error instances. For example
new Error("something bad happened", {cause:innerException})
It's wild that that's not what the section is about. Extending Error is not new at all.
Most people (including the author, apparently) don't know they can chain errors with the cause option, a built-in mechanism in both Node and the browser. It's not just arbitrary extending, and it's relatively new. https://nodejs.org/api/errors.html#errorcause
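For example (sketch):

import { readFile } from 'node:fs/promises';

let config;
try {
  config = JSON.parse(await readFile('./config.json', 'utf8'));
} catch (err) {
  throw new Error('failed to load config', { cause: err });
}
// whoever catches it can walk the chain: err.message, err.cause, err.cause.cause, ...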
Cool didn't know about this
Some good stuff in here. I had no idea about AsyncIterators before this article, but I've done similar things with generators in the past.
A couple of things seem borrowed from Bun (unless I didn't know about them before?). This seems to be the silver lining from the constant churn in the Javascript ecosystem
Thank you for this. Very helpful as I was just starting to dig into node for first time in a few years.
I feel like node and deno conventions are somehow merging (which is a good thing)
Yes around web standards
I think this partly at least is coming from the WinterCG efforts.
Deno has sandboxing tho
Will node one day absorb Typescript and use it as default?
My modern node.js pattern is to install bun now.
Why bother with node when bun is a much better alternative for new projects?
I am being sincere and a little self deprecating when I say: because I prefer Gen X-coded projects (Node, and Deno for that matter) to Gen Z-coded projects (Bun).
Bun being VC-backed allows me to fig-leaf that emotional preference with a rational facade.
I think I kind of get you, there's something I find off putting about Bun like it's a trendy ORM or front end framework where Node and Deno are trying to be the boring infrastructure a runtime should be.
Not to say Deno doesn't try, some of their marketing feels very "how do you do fellow kids" like they're trying to play the JS hype game but don't know how to.
Yes, that's it. I don't want a cute runtime, I want a boring and reliable one.
Deno has a cute mascot, but everything else about it says "trust me, I'm not exciting". Ryan Dahl himself also brings an "I've done this before" pedigree.
Because Bun is still far less mature (and the stack on which it builds is even less so - Zig isn't even 1.0).
Because its Node.js compat isn't perfect, and so if you're running on Node in prod for whatever reason (e.g. because it's an Electron app), you might want to use the same thing in dev to avoid "why doesn't it work??" head scratches.
Because Bun doesn't have as good IDE integration as Node does.
I haven't used it for a few months but in my experience, its package/monorepo management features suck compared to pnpm (dependencies leak between monorepo packages, the command line is buggy, etc), bun --bun is stupid, build scripts for packages routinely blow up since they use node so i end up needing to have both node and bun present for installs to work, packages routinely crash because they're not bun-compatible, most of the useful optimizations are making it into Node anyway, and installing ramda or whatever takes 2 seconds and I trust it so all of Bun's random helper libraries are of marginal utility.
because bun is written in a language that isn't even stable (zig) and uses webkit. None of the developer niceties will cover that up. I also don't know if they'll be able to monetize, which means it might die if funding dries up.
Why bother with bun when deno 2 is a much better alternative for new projects?
Why bother with deno 2 when node 22 is a much better alternative for new projects?
(closing the circle)
By the time you finish reading this guide and update your codebase, the state-of-the-art JS best practices have changed at least twice
Unless it changed how Node.js handles this, you shouldn't use Promise.all(), because if more than one promise rejects, the second rejection will emit an unhandledRejection event, and by default that crashes your server. Use Promise.allSettled() instead.
Promise.all() itself doesn't inherently cause unhandledRejection events. Any rejected promise that is left unhandled will throw an unhandledRejection, allSettled just collects all rejections, as well as fulfillments for you. There are still legitimate use cases for Promise.all, as there are ones for Promise.allSettled, Promise.race, Promise.any, etc. They each serve a different need.
Try it for yourself:
> node
> Promise.all([Promise.reject()])
> Promise.reject()
> Promise.allSettled([Promise.reject()])
Promise.allSettled never results in an unhandledRejection, because it never rejects under any circumstance.
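In other words (sketch; urlA/urlB are placeholders):

const results = await Promise.allSettled([fetch(urlA), fetch(urlB)]);
for (const result of results) {
  if (result.status === 'fulfilled') console.log('ok:', result.value.status);
  else console.error('failed:', result.reason); // collected here, no unhandledRejection
}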
This didn't feel right so I went and tested.
process.on("uncaughException", (e) => {
console.log("uncaughException", e);
});
try {
const r = await Promise.all([
Promise.reject(new Error('1')),
new Promise((resolve, reject) => {
setTimeout(() => reject(new Error('2'), 1000));
}),
]);
console.log("r", r);
} catch (e) {
console.log("catch", e);
}
setTimeout(() => {
console.log("setTimeout");
}, 2000);
Produces: alvaro@DESKTOP ~/Projects/tests
$ node -v
v22.12.0
alvaro@DESKTOP ~/Projects/tests
$ node index.js
catch Error: 1
at file:///C:/Users/kaoD/Projects/tests/index.js:7:22
at ModuleJob.run (node:internal/modules/esm/module_job:271:25)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:547:26)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)
setTimeout
So, nope. The promises are just ignored.
So they did change it! Good.
I definitely had a crash like that a long time ago, and you can find multiple articles describing that behavior. It was existing for quite a time, so I didn't think that is something they would fix so I didn't keep track of it.
Typo? ”uncaughException”
When using Promise.all(), it won't fail entirely if individual promises have their own .catch() handlers.
Subtle enough you’ll learn once to not do that again if you’re not looking for that behavior.
Node now has limited support for TypeScript and has SQLite built in, so it becomes really good for small/personal web-oriented projects.
The constant churn.
"SlopDetector has detected 2 x seamlessly and 7 x em-dash, would you like to continue?"
I use em-dash these days just to trigger tinfoil hats like you.
Screaming "you’re not just writing contemporary code—you’re building applications that are more maintainable, performant, and aligned"
fwiw i use em-dashes
I did too. :( I had to stop using them after people started assuming only LLMs use them.
we cannot escape the AI generated slop can we?
online writing before 2022 is the low-background steel of the information age. now these models will all be training on their own output. what will the consequences be of this?
I could never get into node but i've recently been dabbling with bun which is super nice. I still don't think i'll give node a chance but maybe i'm missing out.
I love Node's built-in testing and how it integrates with VSCode's test runner. But I still miss Jest matchers. The Vitest team ported Jest matchers for their own use. I wish there were a similar compatibility between Jest matchers and Node testing as well.
Currently for very small projects I use the built in NodeJS test tooling.
But for larger and more complex projects, I tend to use Vitest these days. At 40MBs down, and most of the dependency weight falling to Vite (33MBs and something I likely already have installed directly), it's not too heavy of a dependency.
It is based on Vite, and a bundler has no place in my backend. Vite is based on Rollup; Rollup uses some other things such as SWC. I want to use TypeScript projects and npm workspaces, which Vite doesn't seem to care about.
assertions in node test feel very "technically correct but kind of ugly" compared to jest, but I'll use it anyway
Yes, but consider this Jest code; replicating it in node testing is painful. Testing code should be DSL-like, very easy to read.
expect(bar).toEqual(
expect.objectContaining({
symbol: `BTC`,
interval: `hour`,
timestamp: expect.any(Number),
o: expect.any(Number),
h: expect.any(Number),
l: expect.any(Number),
c: expect.any(Number),
v: expect.any(Number)
})
);
Is current node.js a better language than .NET 6/7/8/9, why or why not?
Node.js is a runtime, not a language. It is quite capable, but as per usual, it depends on what you need/have/know, ASP.NET Core is a very good choice too.
> ASP.NET Core is a very good choice too.
I have found this to not be true.
Recently?
In my experience ASP.NET 9 is vastly more productive and capable than Node.js. It has a nicer developer experience, it is faster to compile, faster to deploy, faster to start, serves responses faster, it has more "batteries included", etc, etc...
What's the downside?
Compile speed and a subjective DX opinion are very debatable.
The breadth of npm packages is a good reason to use node. It has basically everything.
It has terrible half-completed versions of everything, all of which are subtly incompatible with everything else.
I regularly see popular packages that are developed by essentially one person, or a tiny volunteer team that has priorities other than things working.
Something else I noticed is that NPM packages have little to no "foresight" or planning ahead... because they're simply an itch that someone needed to scratch. There's no cohesive vision or corporate plan as a driving force, so you get a random mish-mash of support, compatibility, lifecycle, etc...
That's fun, I suppose, if you enjoy a combinatorial explosion of choice and tinkering with compatibility shims all day instead of delivering boring stuff like "business value".
If you're willing to stick to pure MS libraries…
I used to agree, but when you have libraries like MediatR, MassTransit, and Moq going (or looking to go) paid, I'm not confident that the wider ecosystem is in a much better spot.
No. Because C#, while far from perfect, is still a drastically better language than JS (or even TS), and .NET stdlib comes with a lot of batteries included. Also because the JS package ecosystem is, to put it bluntly, insane; everything breaks all the time. The probability of successfully running a random Node.js project that hasn't been maintained for a few years is rather low.
In my experience, no.
It's still single-threaded, it still uses millions of tiny files (making startup very slow), it still has wildly inconsistent basic management because it doesn't have "batteries included", etc...
You can bundle it all into one file and it's not single threaded anymore. There's this thing called worker_threads.
But yes there are downsides. But the biggest ones you brought up are not true.
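For reference, bundling to a single file is usually done with a bundler (for example esbuild's `--bundle --platform=node` mode, flags from memory), and a minimal worker_threads sketch looks roughly like this single CommonJS file (the numbers are arbitrary):

const { Worker, isMainThread, parentPort } = require('node:worker_threads');

if (isMainThread) {
  // Main thread: spawn a worker running this same file.
  const worker = new Worker(__filename);
  worker.on('message', (msg) => {
    console.log('from worker:', msg); // from worker: 42
    worker.terminate();
  });
  worker.postMessage(41);
} else {
  // Worker thread: reply with the incoming number plus one.
  parentPort.on('message', (n) => parentPort.postMessage(n + 1));
}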
> You can bundle it all into one file
This is the first I'm hearing of this, and a quick Google search found me a bunch of conflicting "methods" just within the NestJS ecosystem, and no clear indication of which one actually works.
nest build --webpack
nest build --builder=webpack
... and of course I get errors with both of those that I don't get with a plain "nest build". (The error also helpfully specifies only the directory in the source, not the filename! Wtf?)
Is this because NestJS is a "squishy scripting system" designed for hobbyists that edit API controller scripts live on the production server, and this is the first time that it has been actually built, or... is it because webpack has some obscure compatibility issue with a package?
... or is it because I have the "wrong" hieroglyphics in some Typescript config file?
Who knows!
> There's this thing called worker_threads.
Which are not even remotely the same as the .NET runtime and ASP.NET, which have a symmetric threading model where requests are handled on a thread pool by default. Node.js allows "special" computations to be offloaded to workers, but not HTTP requests. These worker threads can only communicate with the main thread through byte buffers!
In .NET land I can simply use a concurrent dictionary or any similar shared data structure... and it just works. Heck, I can process a single IEnumerable, list, or array using parallel workers trivially.
If you read my comment I said there are downsides:
"But yes there are downsides. But the biggest ones you brought up are not true."
My point is... what you said is NOT true. And even after your reply, it's still not true. You brought up some downsides in your subsequent reply... but again, your initial reply wasn't true.
That's all. I acknowledge the downsides, but my point remains the same.
Perhaps the technology that you are using is loaded with hundreds of foot-guns if you have to spend time on enforcing these patterns.
Rather than logically focusing on making money, it wastes time shuffling code around and being an architecture astronaut, with the main focus on details rather than shipping.
One of the biggest errors one can make is still using Node.js and Javascript on the server in 2025.
JS on the backend was arguably an even bigger mistake when the JS ecosystem was less sophisticated. The levels of duct tape are dizzying. Although we might go back even further and ask if JS was also a mistake when it was added to the browser.
I often wonder about a what-if, alternate history scenario where Java had been rolled out to the browser in a more thoughtful way. Poor sandboxing, the Netscape plugin paradigm and perhaps Sun's licensing needs vs. Microsoft's practices ruined it.
What the f are you even talking about. It literally lists features modern Node.js has, there’s nothing to enforce.
Yet more architecture astronaut behavior by people who really should just be focusing on ifs, fors, arrays, and functions.
Architecture astronaut is a term I hadn't heard but can appreciate. However I fail to see that here. It's a fair overview of newish Node features... Haven't touched Node in a few years so kinda useful.
It's a good one with some history and growing public knowledge now. I'd encourage a deep dive; it goes all the way back to at least C++ and Smalltalk.
While I can see some arguments for "we need good tools like Node so that we can more easily write actual applications that solve actual business problems", this seems to me to be the opposite.
All I should ever have to do to import a bunch of functions from a file is
"import * from './path'"
anything more than that is a solution in search of a problem
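(For what it's worth, the valid ESM spelling of that is a namespace import. A tiny sketch, with made-up file names:)

// util.js (hypothetical module)
export const greet = (name) => `hello ${name}`;

// main.js
import * as util from './util.js';
console.log(util.greet('world'));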
Isn't that exactly the syntax being recommended? Could you explain what exactly in the article is a solution in search of a problem?
Did you read the article? Your comments feel entirely disconnected from its contents - mostly low-level pieces and things that can replace libraries you probably used anyway.
what? This is an overview of modern features provided in a programming language runtime. Are you saying the author shouldn’t be wasting their time writing about them and should be writing for loops instead? Or are you saying the core devs of a language runtime shouldn’t be focused on architecture and should instead be writing for loops?
One of the core things Node.js got right was streams. (Anyone remember substack’s presentation “Thinking in streams”?) It’s good to see them continue to push that forward.
Why? Why is a stream better than an array? Why is the concept of a realtime loop and for looping through a buffer not sufficient?
I think there are several reasons.

First, the abstraction of a stream of data is useful when a program does more than process a single realtime loop. For example, adding a timeout to a stream of data, switching from one stream processor to another, splitting a stream into two streams or joining two streams into one, and generally all of the patterns found in the Observable pattern, in unix pipes, and more generally in event-based systems, are modelled better with push- and pull-based streams than in a realtime tight loop.

Second, for the same reason that looping through an array with map or forEach is often favored over a for loop, for loops are often favored over while loops, and while loops are favored over goto statements: it reduces the amount of human-managed control flow bookkeeping, which is precisely where humans tend to introduce logic errors.

And lastly, because it almost always takes less human effort to write and maintain stream processing code than it does to write and maintain a realtime loop against a buffer.
Hopefully this helps! :D
Streams have backpressure, making it possible for downstream to tell upstream to throttle their streaming. This avoids many issues related to queuing theory.
That also happens automatically, it is abstracted away from the users of streams.
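A small sketch of what that looks like in practice with node:stream's pipeline (file names are made up):

const fs = require('node:fs');
const zlib = require('node:zlib');
const { pipeline } = require('node:stream');

// pipeline() wires the streams together and propagates backpressure
// automatically: if the write side is slow, reading is paused.
pipeline(
  fs.createReadStream('input.log'),
  zlib.createGzip(),
  fs.createWriteStream('input.log.gz'),
  (err) => {
    if (err) console.error('pipeline failed', err);
  }
);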
A stream is not necessarily always better than an array, of course it depends on the situation. They are different things. But if you find yourself with a flow of data that you don't want to buffer entirely in memory before you process it and send it elsewhere, a stream-like abstraction can be very helpful.
Why is an array better than pointer arithmetic and manually managing memory? Because it's a higher level abstraction that frees you from the low level plumbing and gives you new ways to think and code.
Streams can be piped, split, joined etc. You can do all these things with arrays but you'll be doing a lot of bookkeeping yourself. Also streams have backpressure signalling
Backpressure signaling can be handled with your own "event loop" and array syntax.
Manually managing memory is in fact almost always better than what we are given in node and java and so on. We succeed as a society in spite of this, not because of this.
There is some point of diminishing returns, say, like the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, now we know why.