For the hundreds of people reading this article right now - you might be amused to know that you're accessing it from a mac mini on my desk:
https://www.contraption.co/a-mini-data-center/
(The CPU load from this is pretty negligible).
What is HackerNews but a system to stress test everyone's hobby websites?
Before this Digg, before that Slashdot.
What else am I missing?
Kuro5hin was pretty big, back in the day. But /. was the biggie, along with Need To Know. We get the term "slashdotted" from there, after all.
Holy… I still miss kuro5hin. Wonder what rusty is doing nowadays.
thank you!
Before that we were all on USENET; some lucky ones were on CompuServe and AOL, and BBSs were limited by phone lines, so there wasn't really anything to test loads with.
>Before this Digg, before that Slashdot.
>What else am I missing?
You are missing Reddit.
/u/seddit
del.icio.us
anyone?
Fark
Every time I share a project I provide two links, one for my VPS and another for GitHub Pages. Usually my projects run on the client, so I have never experienced the hug of death myself.
I absolutely love this comment <3
Back in my day, kid, we used to serve far more users from 40MHz CPUs. The only interesting part is that today you can get pipes fast enough to do this in your house, while back then dialup was all we could afford ($1000/month to get into the 1 megabit/second range; ISDN and DSL came soon after and were nice).
Of course back then we didn't use dynamic anything, a static web page worked.
Now get off my lawn!
My first company website was served off a 120MHz Pentium that also served as the login server where 5 of us ran our X clients (with the X servers on 486s with 16MB RAM)...
And it wasn't static: because people's connections were mostly so slow, we used a CGI that shelled out to ping to estimate connection speed, and returned either a static image (if you were on dialup) or a fancy animated gif if you were on anything faster.
(the ping-test was obviously not reliable - if you were visiting from somewhere with high latency, you'd get the low bandwidth version too, no matter how high your throughput was - but that was rare enough; it worked surprisingly well)
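Something in the spirit of that CGI, as a Ruby sketch (not the original code; the 150ms threshold and filenames are invented):

  #!/usr/bin/env ruby
  # Shell out to ping and use round-trip time as a crude speed heuristic.
  addr = ENV["REMOTE_ADDR"]  # set by the web server, not the user
  rtt  = `ping -c 1 #{addr}`[/time=([\d.]+)/, 1].to_f  # ms; 0.0 if no reply

  # High latency (or no reply) gets the lightweight image.
  file = (rtt.zero? || rtt > 150) ? "plain.gif" : "fancy-animated.gif"
  body = File.binread(file)

  print "Content-Type: image/gif\r\n"
  print "Content-Length: #{body.bytesize}\r\n\r\n"
  print body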
I used to host 3,000 active daily users from a 33MHz 486 with a 56k modem.
Thousands and thousands of lines of quality conversation, interaction, humanity.
To be honest, I kind of miss those days.
I love to think that the web of the future is just going to be everyone's Mac mini or whatever.
Big Data™ has always irked me, frankly.
Everyone moved too fast into the future, and this is perhaps not that good. The whole ASCII and 90s/cyberpunk nostalgia wave is a major cue.
We need something that’s small, cheap, plugs into a power outlet (or a PoE port), and lets anyone serve their personal little node of their distributed social network.
I started thinking about that around an implementation that could run under Google’s App Engine free tier, but never completed it.
I like that you're pointing out application longevity in the linked article. It seems that new SaaS apps appear and disappear daily as cloud hosting isn't cheap (especially for indie hackers). I'd much rather sign up for an app that I knew wouldn't randomly disappear in a couple of months when the cloud bills surpass the profits.
I took a startup from zero to 100k MRR as of last month over the last 5 years. I can tell you that cloud billing is the least of your concerns if you pay even cursory attention to writing good queries and adding indexes in the right places. The real issue is the number of developers who never bother to learn how to structure data in a database for their use case. Properly done, you can easily support thousands of paying users on a single write server.
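For the record, "indexes in the right places" mostly means matching your hot query paths; a hypothetical Rails migration (table and columns invented):

  # Supports e.g. Order.where(account_id: id, status: "open")
  #                    .order(created_at: :desc)
  # so the database can seek on the index instead of scanning the table.
  class AddAccountStatusIndexToOrders < ActiveRecord::Migration[7.1]
    def change
      add_index :orders, [:account_id, :status, :created_at]
    end
  end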
A bit hand wavy. It obviously depends on the business and what "least of concerns" entails.
In most cases businesses justify the cost of managed databases with a lower risk of downtime. An HA Postgres server on Crunchy can cost over $500/mo for a measly 4 vCPUs.
I would agree that it's the least of concerns but for a different reason. Spending all your time optimizing for optimal performance (assuming sensible indexing for what you have) by continuously redesigning your DB structure when you don't even know what your company will be doing next year isn't worth the time for a few hundred a month you might save.
I use CQRS with /dev/null for writes and /dev/random for reads. It's web scale, it's cheap and it's fast.
> I can tell you that cloud billing is the least of your concerns if you pay even the cursory attention to writing good queries and adding indexes in the right places.
I read this as "in building your startup, you should be paranoid about team members never making mistakes". I really try to read it otherwise, but can't.
What? No no, to be fast you need the whole database only in RAM! And SQL is hard so just make it a giant KV store. Schemas are also hard so all values are just amorphous JSON blobs. Might as well store images in the database too. Since it's RAM it'll be so fast!
/s
What kind of Mac mini do you use (CPU and RAM)? I'm really interested in making the same thing, but I'm not sure if the base M4 mini is enough with just 16GB of RAM.
Depends what you're doing. If it's literally just serving up some static web pages though that is hilariously over specified, you're going to be constrained by your internet connection long before that Mac Mini starts breaking a sweat.
That's amazing. The Mac mini is very efficient and makes a great little home server. It idles at 3-4W total for the entire machine. Plus, the M4 is a beast of a CPU. It might even be possible to serve a small LLM, like a 3B model, on it over the internet.
Yeah, the Mac minis can have up to 64GB of RAM, which would support some usable models. However, I accidentally got one with 24GB of RAM, and my apps already use 12GB. So perhaps I'll get a second box just for LLMs!
A small model like 1B or 3B should be OK with 16GB. I was thinking, in the name of savings, you could just use the same machine.
It's a cool project. I might do it too. I have an M4 Mini sitting on my desk that I got for $550.
I've been thinking about that article for the past week so much that I've been looking at $250 Ryzen 7 5700U 16/512/2.5G Ace Magician NUCs to move some of my properties to. They're known to be shipping spyware on their Windows machines, but my thought was that I'd get 3 of them, clear them out with Debian, and set them up as a k8s cluster and have enough horsepower to handle postgres at scale.
Get a NUC, or one of those refurbished Dell or HP mini PCs. They have plenty of CPU power, consume very little idle power, and are friendly to Linux.
I have been wildly happy with my EliteDesk mini pcs. Mine are the “G5” generation which cost like $60-150 on eBay with varying specs, obviously newer generations have better specs but for my “homelab” needs these have been great. I even put a discrete GPU ($60) in my AMD one for a great little Minecraft machine for playing with the kid.
I have a G5 EliteDesk small form factor PC (about the size of a large cereal box, not a book) that's been running my media server and torrent download services for years now. It has a plucky little 10th gen i3 or something, and it has been more than enough. It can real-time transcode 4K movies! Dead quiet and sips electricity. Uptime is on average about 8-10 months.
Glad it resonated with you!
If you're considering k8s, take a look at Kamal (also from DHH): https://kamal-deploy.org/
I think it makes more sense for small clusters.
It probably does! Kamal/MRSK has been on the roadmap for a while. I have deliberately endeavored to keep the existing k8s setup as minimal as possible, and it's still grown to be almost unruly. That said, it works well enough across the (surprisingly power efficient) Dell C1100s in the basement, so it'd take a migration to justify, which is of course the last thing you can justify this with.
Presumably CF is doing most of the work if the page doesn't actually change all that much?
Nobody's actually doing work because serving web pages is cheap.
Is it really cheap through ruby?
Yeah, but there's Plausible Analytics self-hosted on the mac mini that's getting more of the load right now.
It's fun to host at home, I run docker on alpine VMs on two proxmox machines. Yeah, different docker machines for each user or use-case look complicated but it works fine and I can mount nfs or samba mounts as needed. Only thing I have on the cloud is a small hetzner server which I mostly use as a nginx proxy and iptables is great for that minecraft VM.
Why did you go for Cloudflare Tunnel instead of WireGuard?
Cloudflare Tunnel provides you a publicly routable address for free. With WireGuard you would still need a VM somewhere, and if you are hosting your own VM, then what's the point?
It's a small cost of $4.50/month and allows me a lot more control. As for WireGuard, that one VM I pay for is the central WireGuard node for all sorts of devices that I use, allowing me to securely access home services when I'm not at home. There are services you don't want to expose directly via a Cloudflare Tunnel.
Not making Cloudflare more of a central point of failure for the internet? We hosted web pages before they MITM'd the entire web.
Public IPv4 addresses were exhausted and NAT happened.
Even having IPv6 is not a proper solution because of lagging ISPs (adoption is currently only ~50%), and even the ISPs who do deploy it often don't deploy it properly (dynamic prefixes or blocked inbound IPv6).
Add to the mix that a lot of people don't understand IPv6, and the internet became more centralized and will keep doing so for the foreseeable future.
I like how they offer amazingly great free services and people are upset that so many people use them.
That's what we all said about various Google products many years ago, too.
> We hosted web pages before they MITM'd the entire web.
We also hosted web pages before the average script kiddie could run tens of Gbps DDoS on sites for the lolz. And before ISPs used CGNAT making direct inbound connections impossible.
But you are using someone else’s VM. You just don’t pay for it.
I actually read that blog post too last week (or the week before?) and I’m genuinely considering this.
Render is crazy expensive for blog sites and hobby apps.
Here's the core scripts I use for the mac mini. They're a bit raw, but hopefully useful:
https://gist.github.com/philipithomas/ed57890dc2f928658d2c90...
I was using an old Samsung S8; with a USB-C ethernet adaptor it was more than capable of serving a lot of requests.
Weirdly, that tower in the photo is also on the front page of HN right now
Ah - I took that photo on the way to Mount Olympus Park, which is one of my favorite little parks in SF. It has an interesting history:
Nice!
Love all the projects you have going. Do you use a template for the landing pages? Or DIY? They look great!
Thanks! Postcard was made with some help from TailwindUI.com. Booklet's homepage is based on a Webflow template: https://webflow.com/templates/html/gather-saas-website-templ...
Now do it without Cloudflare :)
I wrote a blog post that generated a lot of traffic on HackerNews last year when it briefly was on #1 here. My blog was (and still is) hosted on a 9-year old Dell Latitude E7250 with Intel Core i5-6300U processor. The server held up fine with ~350 concurrent readers at its peak. It was actually my fiber router that had trouble keeping up. But even though things got a bit slow, it held up fine, without Cloudflare or anything fancy.
Perhaps some day.
My shorter-term goal is to switch my home internet to Starlink, so that all requests bounce off a satellite before landing at my desk.
Except Starlink uses CGNAT, which means you need some external SSHD port forwarding at least.
He could keep using Cloudflare Tunnel, but then he's still using Cloudflare
Trivial, even for a high traffic website to be served from a fiber connection.
Computers are stupid good at serving files over http.
I’ve served (much) greater-than-HN traffic from a machine probably weaker than that mini. A good bit of it dynamic. You just gotta let actual web servers (apache2 in that case) serve real files as much as possible, and use memory cache to keep db load under control.
I’m not even that good. Sites fall over largely because nobody even tried to make them efficient.
I’m reminded of a site I was called in to help rescue during the pandemic. It was a site that was getting a lot higher traffic (maybe 2-3x) than they were used to, a Rails app on Heroku. These guys were forced to upgrade to the highest postgres that Heroku offered - which was either $5k or $10k a month, I forget - for not that many concurrent users. Turns out that just hitting a random piece of content page (a GET) triggered so many writes that it was just overwhelming the DB when they got that much traffic. They were smart developers too, just nobody ever told them that a very cacheable GET on a resource shouldn’t have blocking activities other than what’s needed, or trigger any high-priority DB writes.
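The fix for that class of problem is usually small. A hedged Rails sketch (controller and job names hypothetical): let the GET be cached, and move bookkeeping writes out of the request path.

  class ContentsController < ApplicationController
    def show
      @content = Content.find(params[:id])

      # Let browsers/CDNs cache the response instead of hitting the app.
      expires_in 5.minutes, public: true

      # Don't block the page on analytics-style writes; enqueue them.
      TrackViewJob.perform_later(@content.id)
    end
  end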
And nobody knows how stuff works at the web server level anymore... The C10K problem was solved a long time ago. Now it's just embarrassing.
If only my part of SF had fiber service. #1 city for tech, but I still have to rely on Comcast.
Sounds weird to read that from Western Europe where even the most rural places have fiber!
I understand that the USA is big, but no fiber in SF?
In the US, it’s not about money or demand. The more entrenched cities (especially in California, for some historic reasons/legislation) tend to have a much more difficult time getting fiber installed. It all comes down to bureaucracy and NIMBYism.
SF is mostly served by AT&T, who abandoned any pretense of upgrading their decrepit copper 20 years ago, and Comcast, whose motto is “whatcha gonna do, go get DSL?”
AT&T has put fiber out in little patches, but only in deals with a guaranteed immediate ROI, so it would mean brand new buildings, where they know everyone will sign up, or deals like my old apartment, where they got their service included in the HOA fee, so 100% adoption rate guaranteed! AT&T loves not competing for business.
Sure, others have been able to painstakingly roll out fiber in some places, but it costs millions of dollars to string fiber on each street and to get it to buildings.
Lived in an older neighborhood in Georgia a couple years back. A new neighborhood across the street had it (AT&T), but we didn't.
Caught an AT&T tech in the field one day, and he claimed that if 8 (or 10—memory's a little fuzzy) people in the neighborhood requested it, they'd bring it in.
I never did test it, but thought it interesting that they'd do it for that low a number. Of course, it may have been because it was already in the area.
Still, may be worth the ask for those who don't already have it.
> where even the most rural places have fiber!
No need for the hyperbole. I know for a fact that you don't get fiber in the remote countryside of France
We have fiber in half of SF via Sonic, where there are overhead wires. The other half of SF has its utilities underground, making the economics more difficult.
https://bestneighborhood.org/fiber-tv-and-internet-san-franc... has a detailed map, by provider, if you wanna dig into the gory details, but there is fiber, just not everywhere.
Not where I am
Been using a setup following this for quite a while. Nginx reverse proxy on a cheap VPS with a wireguard tunnel to home.
A mac mini is pretty beefy for hosting a blog!
I’ve had a number of database-driven sites hosted on $5/month VPS that have been on the front page here with minimal cpu or memory load.
It's hosting a variety of apps - blog (Ghost), plausible analytics, metabase, and soon 3 Rails apps. It's unfortunately running Postgres, MySQL, and Clickhouse.
Are Cloudflare Tunnels really so free that they can support thousands of internet requests?
I run a windows server at my office where we connect to it using RDP from multiple locations. If I could instead buy the hardware and use cloudflare tunnels to let my team RDP to it then it would save me a lot of money. I could recoup my hardware cost in less than a year. Would this be possible?
(I wouldn't mind paying for cloudflare tunnels / zero trust. It just should be much smaller than the monthly payment I make to Microsoft)
I used Cloudflare Tunnels for a project that had hundreds of tunnels and did roughly 10GB/day of traffic, entirely for free. The project has since moved to Cloudflare Enterprise, where it pays the opposite of free, but that was completely expected as the project grew.
I'm pretty sure Tunnels supports RDP, and if you don't use a ton of bandwidth (probably under 1TB/mo), Cloudflare probably won't bother you.
Yup. Cloudflare's typical proxy already handles massive amounts of traffic, so I expect that the marginal cost of this reverse proxy isn't that high.
I do think Cloudflare has proven itself to be very developer/indie-friendly. One of the only tech unicorns that really doesn't impose its morality on customers.
I see you're serving a GTS certificate. Does GCP allow you to download TLS certificates? I honestly didn't know. I thought just like AWS, you get them only when using their services like load balancers, app runners etc.
Not OP, but the site sits behind Cloudflare, and Cloudflare uses Google Trust Services (GTS) and Let's Encrypt for edge certificates.
https://developers.cloudflare.com/ssl/reference/certificate-...
How much does it cost to keep the Mac mini on for a month? I've been thinking of doing the same.
Pretty cool. Wouldn't work for me as my ISP is horrendously unreliable (Rogers in Canada, I swear they bounce their network nightly), but I might consider colocating a mac mini at a datacenter.
the article is AI-generated isn't it ?
Nope
Lol, just by reading it I knew it was. Then I used an AI detection tool and it said it's 100% sure the post is AI-generated. Do you know how hard it is to get a 100% confidence score?
Most "AI detection tools" are just the equivalent of a Magic 8 ball.
In fact, most of them are just implemented by feeding an LLM the text, and asking "is it AI generated?". You cannot trust that answer any more than any other LLM hallucination. LLMs don't have a magic ability to recognise their own output.
Even if your "detection tool" was using exactly the same model, at the same exact version... unless the generation was done with 0 temperature, you just wouldn't be able to confirm that the tool would actually generate the same text that you suspect of being LLM-generated. And even then, you'd need to know exactly the input tokens (including the prompt) used.
Currently, the only solution is watermarking, like what Deepmind created:
https://deepmind.google/discover/blog/watermarking-ai-genera...
but even that, it requires cooperation from all the LLM vendors. There's always going to be one (maybe self-hosted) LLM out there which won't play ball.
If you're going to accuse someone of pushing LLM-generated content, don't hide behind "computer said so", not without clearly qualifying what kind of detection technique and which "detection tool" you used.
I am starting to believe this is a lie spread by AI companies, because if AI slop starts to be detected at scale, it kills their primary use case. True, AI detection tools are not perfect; like any classification algo, they don't have 100% accuracy. But that does not mean they are useless. They give useful probabilities. If AI detectors are so wrong, how do you explain that when I pass AI-generated text to GPTZero it catches it every time, and when I pass human-written content it recognises it as such almost 99% of the time?
It's the false positives that make it useless. Even if it's generally very good at detecting AI, the fact that it can and does throw false positives (and pretty frequently) means that nothing it says means anything.
You can kind of tell it's not AI when it gets beyond the generic stuff and on to say
>Today I'm working on Chroma Cloud, designed for exploring and managing large datasets, and Next.js powers its advanced interactions and data loading requirements.
which is unlikely to have been written by an LLM.
You can inject personal stuff to make it feel original, but huge chunks are still AI-generated. Just take the first 4/5 paragraphs and paste them into GPTZero.
Well on the one hand you have gptzero saying it's in the style of AI which I don't count as reliable and on the other you have the author saying it's not which I weight higher.
And it mostly makes too much sense, apart from "most drivers don't know how many gears their car has", which has me thinking: huh? It's usually written on the shifter.
Hosting from home is fun, I guess, but it actually was a money-saving exercise before the cloud. I've done it.
Now, however, what is the point? To learn server config? I am running my blog with GitHub pages. A couple of posts made it to the top of HN, and I never had to worry.
Always bewilders me when some sites here go down under load. I mean, where are they hosting it that a static page in 2020s has performance issues?
That makes sense, because serving a web page to a few hundred people is not a computationally expensive problem. :3
I self-host analytics on the box (Plausible), which is using more resources than the website. There are a few apps on there, too.
Plausible is hardly compute intensive.
Very cool! Do you have a contingency in place for things like power outages?
Not really . . . Cloudflare Always Online, mostly.
I had 2m35s of downtime due to power outages this week.
A MacBook Air solves this problem very nicely!
Not only does it have a built-in UPS, it also comes with a screen, keyboard and trackpad for when you need to do admin tasks physically at the console!
Nice
I really like web apps that are just CRUD forms. It obviously doesn't work for everything, but the "list of X -> form -> updated list of X" user experience works really well for a lot of problem domains, especially ones that interact with the real world. It lets you name your concepts, and gives everything a really sensible place to change it. "Do I have an appointment, let me check the list of appointments".
Contrast that to more "app-y" patterns, which might have some unifying calendar, or mix things into a dashboard. Those patterns are also useful!! And of course, all buildable in Rails as well. But there is something nice about the simplicity of CRUD apps when I end up coming across one.
So even though you can build in any style with whatever technology you want:
Rails feels like it _prefers_ you build "1 model = 1 concept = 1 REST entity"
Next.js (+ many other FE libraries in this react-meta-library group) feels like it _prefers_ you build "1 task/view = mixed concepts to accomplish a task = 1 specific screen"
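As a tiny illustration of the Rails-preferred shape:

  # config/routes.rb: one concept, one REST entity. `resources` expands
  # into the seven conventional CRUD routes
  # (index/show/new/create/edit/update/destroy).
  Rails.application.routes.draw do
    resources :appointments
  end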
The problem with 1 model = 1 rest entity (in my experience) is that designers and users of the applications I have been building for years never want just one model on the screen.
Inevitably, once one update is done, they'll say "oh and we just need to add this one thing here" and that cycle repeats constantly.
If you have a single page front end setup, and a "RESTful" backend, you end up making a dozen or more API calls just to show everything, even if it STARTED out as narrowly focused on one thing.
I've fought the urge to use graphql for years, but I'm starting to think that it might be worth it just to force a separation between the "view" of the API and the entities that back it. The tight coupling between a single controller, model and view ends up pushing the natural complexity to the wrong layer (the frontend) instead of hiding the complexity where it belongs (behind the API).
Why the assumption that an API endpoint should be a 1:1 mapping to a database table? There is no reason we need to force that constraint. It's perfectly legitimate to consider your resource to encompass the business logic for that use case. For example, updating a user profile can involve a single API call that updates multiple data objects - Profile, Address, Email, Phone. The UI should be concerned with "Update Profile" and let the API controller orchestrate all the underlying data relationships and updates.
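For example, a sketch with hypothetical models: the endpoint is "the profile", even though several records change behind it.

  class ProfilesController < ApplicationController
    # PATCH /profile: one resource from the client's point of view.
    def update
      ActiveRecord::Base.transaction do
        current_user.profile.update!(params.require(:profile).permit(:bio))
        current_user.address.update!(params.require(:address).permit(:street, :city))
        current_user.update!(params.permit(:email, :phone))
      end
      head :no_content
    end
  end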
You seem to be in agreement with the parent, who argues 1 model (aka database row) = 1 rest entity (aka /widgets/123) is a bad paradigm.
Different widget related front-end views will need different fields and relations (like widget prices, widget categories, user widget history and so on).
There are lots of different solutions:
- Over fetching. /widgets/123 returns not only all the fields for a widget, but more or less every possible relation. So a single API call can support any view, but with the downside that the payload contains far more data than is used by any given view. This not only increases bandwidth but usually also load on the database.
- Lots of API calls. API endpoints are tightly scoped and the front-end picks whichever endpoints are needed for a given view. One view calls /widgets/123 , /widgets/123/prices and /widgets/123/full-description. Another calls /widgets/123 and /widgets/123/categories. And so on. Every view only gets the data it needs, so no over fetching, but now we're making far more HTTP requests and more database queries.
- Tack a little "query language" onto your RESTful endpoints. Now endpoints can do something like: /widgets/123?include=categories,prices,full-description . Everyone gets what they want, but a lot of complexity is added to support this on the backend (a sketch follows below). Trying to automate this on the backend by having code that parses the parameters and automatically generates queries with the needed fields and joins is a minefield of security and performance issues.
- Ditch REST and go with something like GraphQL. This more or less has the same tradeoffs as the option above on the backend, with some additional tradeoffs from switching out the REST paradigm for the GraphQL one.
- Ditch REST and go RPC. Now, endpoints don't correspond to "Resources" (the R in rest), they are just functions that take arguments. So you do stuff like `/get-widget-with-categories-and-prices?id=123`, `/get-widget?id=123&include=categories,prices`, `/fetch?model=widget&id=123&include=categories,prices` or whatever. Ultimate flexibility, but you lose the well understood conventions and organization of a RESTful API.
After many years of doing this many times over, I pretty much dislike all the options.
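For concreteness, the "little query language" option usually ends up looking something like this sketch; the explicit whitelist is much of the backend complexity being alluded to:

  class WidgetsController < ApplicationController
    ALLOWED_INCLUDES = %w[categories prices].freeze

    # GET /widgets/123?include=categories,prices
    def show
      includes = params[:include].to_s.split(",") & ALLOWED_INCLUDES
      scope    = Widget.all
      scope    = scope.includes(*includes.map(&:to_sym)) if includes.any?
      render json: scope.find(params[:id]).as_json(include: includes)
    end
  end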
Lots of API calls scales pretty well, as long as those APIs aren't all hitting the same database. You can do them in parallel. If you really need to you can build a view specific service on the backend to do them in parallel but with shorter round-trips and perhaps shared caches, and then deliver a more curated response to the frontend.
If you just have one single monolithic database, anything clever you do on the other levels just lets you survive until the single monolithic database becomes the bottle-neck, where unexpected load in one endpoint breaks several others.
Webapps are going back to multiple requests because of http2 / quic multiplexing.
So what do you do instead?
I do one or some combination of the options above. I've also tried some more exotic variations of things on the list like Hasura or following jsonapi.org style specs. I haven't found "the one true way" to structure APIs.
When a project is new and small, whatever approach I take feels amazing and destined to work well forever. On big legacy projects or whenever a new project gets big and popular, whatever approach I took starts to feel like a horrible mess.
No, it's that an API entity can be composed of sub-entities, which may or may not be exposed directly via the API.
That's what https://guides.rubyonrails.org/association_basics.html is for.
However, Rails scaffolding is heavily geared towards that 1:1 mapping - you can make all CRUD endpoints, model and migration with a single command.
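That generator invocation, for reference (attribute list invented):

  rails generate scaffold Widget name:string price:decimal

One command produces the migration, the model, a controller with all seven CRUD actions, the views, and the routes entry, among other files.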
If you lean into more 1:1 mappings (not that a model can't hold FKs to submodels), then everything gets stupid easy. Not that what you're saying is hard... just if you lean into 1:1 it's _very easy_. At least for Django that's the vibe.
Rails began that trend by auto-generating "REST" routes for 1:1 table mapping to API resource. By making that so easy, they tricked people into idealizing it
Rails' initial rise in popularity coincided with the rise of REST so these patterns spread widely and outlasted Rails' mindshare
> you end up making a dozen or more API calls just to show everything
This is fine!
> I've fought the urge to use graphql for years
Keep fighting the urge. Or give into it and learn the hard way? Either way you'll end up in the same place.
The UI can make multiple calls to the backend. It's fine.
Or you can make the REST calls return some relations. Also fine.
What you can't do is let the client make arbitrary queries into your database. Because somebody will eventually come along and abuse those APIs. And then you're stuck whitelisting very specific queries... which look exactly like REST.
GraphQL is not arbitrary queries into your database! Folks need to really quit misunderstanding that.
You can define any schema and relations you want, it's not an ORM.
In the spectrum of "remote procedure call" on one end and "insert sql here" on the other end, GraphQL is waaaaay closer to SQL than RPC.
No it's not; GraphQL is an RPC that returns a tree of objects where you can indicate which part of the tree is relevant to you.
Yep. It is not trivial to make it into a pseudo-SQL language, like Hasura did.
Funnily enough, you see this assumption frustrating a lot of people who try to implement GraphQL APIs like this.
And even if you do turn it into a pseudo-SQL, there's still plenty of control. Libraries allow you to restrict depth, restrict number of backend queries, have a cost function, etc.
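In graphql-ruby, for instance, those guardrails are schema-level settings (a sketch; QueryType stands in for whatever your root query type is):

  class AppSchema < GraphQL::Schema
    query QueryType
    max_depth 10              # reject deeply nested queries
    max_complexity 300        # reject queries whose estimated cost is too high
    default_max_page_size 50  # cap connection page sizes
  end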
...and that's exactly the problem! Without a lot of hardening, I (a hostile client) can suck down any part of the database you make available. With just a few calls.
GraphQL is too powerful and too flexible to offer to an untrusted party.
This is a silly argument and sounds like a hot take from someone who's never used this. You could say the same about REST or whatever. It has nothing to do with "the database".
You sound like someone that's never had an adversarial client. I spent years reverse engineering other companies' web APIs.
REST calls are fairly narrowly tailored, return specific information, and it's generally easy to notice when someone is abusing them. "More like RPC".
Your naive GraphQL API, on the other hand, will let me query large chunks of your database at a time. Take a look at Shopify's GraphQL API to see the measures you need to take to harden an API; rate limits defined by the number of nodes returned, convoluted structures to handle cursoring.
GraphQL is the kind of thing that appeals to frontend folks because they can rebalance logic towards the frontend and away from the backend. It's generally a bad idea.
It is arbitrary queries though? I can send any query that matches your schema and your graphql engine is probably going to produce some gnarly stuff to satisfy those queries.
You need to program every query resolver yourself, it's not tied to some ORM.
There are of course products that do this automatically, but it's not really that simple. There's a reason things like Hasura are individual products.
No when I say "schema" I mean the GraphQL structure, not your DB schema.
The GraphQL structure can be totally independent from your DB if need be, and (GraphQL) queries on those types via API can resolve however you need and are defined by you. It's not a SQL generator.
The problem is not that you'll expose some part of the database you shouldn't (which is a concern but it's solvable). The problem is that you expose the ability for a hostile client to easily suck down vast swaths of the part of the database you do expose.
How is this different from REST?
I think the OP is possibly confusing GraphQL with an ORM like Active Record. You are correct that you don't accidentally "expose" any more data than you do with REST or some other APIs. It's just a routing and payload convention. GraphQL schema and types don't have to be 1:1 with your DB or ActiveRecord objects at all.
(I'm not aware of any, but if there are actually gems or libraries that do expose your DB to GraphQL this way, that's not really a GraphQL issue)
Turbo frames solves a lot of this. https://turbo.hotwired.dev/
Multiple models managed on a single page, each with their own controllers and isolated views.
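A minimal ERB sketch of that (frame ids and paths hypothetical):

  <%# Each frame navigates and updates independently of the page around it. %>
  <%= turbo_frame_tag "appointments" do %>
    <%= render @appointments %>
  <% end %>

  <%# Frames can also lazy-load their content from their own controller. %>
  <%= turbo_frame_tag "messages", src: messages_path, loading: :lazy %>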
Or you can do it right and use Elixir's LiveView, from which everyone is getting inspired these days.
LiveView is the brainchild of Chris McCord. He did the prototype on Rails before getting enamoured by Elixir and building Phoenix to popularize the paradigm.
LiveView is amazing and so is Phoenix but Rails has better support for building mobile apps using Hotwire Native.
Graphql is nice but there are all sorts of weird attacks and edge cases because you don't actually control the queries that a client can send. This allows a malicious client to craft really time expensive queries.
So you end up having to put depth and quantity limits, or calculating the cost of every incoming query before allowing it. Another approach I'm aware of is whitelisting but that seems to defeat the entire point.
I use rest for new projects, I wouldn't say never to graphql, but it brings a lot of initial complexity.
I don't understand why you consider this to be a burden. The gateway will calculate the depth / quantities of any query for you, so you're just setting a config option. When you create a REST API, you're making similar kinds of decisions, except you're baking them bespokely into each API.
Query whitelisting makes sense when you're building an API for your own clients (whom you tightly control). This is the original and most common usecase for graphql, though my personal experience is with using it to provide 3rd party APIs.
It's true that you can't expect to do everything identically to how you would have done it with REST (authz will also be different), but that's kind of the point.
A malicious user who had the knowledge and ability to craft expensive GraphQL queries could just as easily use that knowledge to tie your REST API in knots by flooding it with fake requests. Some kind of per-user quota system is going to be required either way.
I have actually had a different experience. I feel like I've run into "we can't just see/edit the thing" more often than "we want another thing here" with users. Naming a report is the kiss of death. "Business Report" ends up having half the data you need, rather than just a filterable list of "transactions" for example.
However, I'm biased. A lot of my jobs have been writing "backoffice" apps, so there's usually models with a really clear identity associated to them, and usually connected to a real piece of paper like a shipment form (logistics), a financial aid application (edtech), or a kitchen ticket (restaurant POS).
Those sorts of applications I find break down with too many "Your school at a glance" sort of pages. Users just want "all the applications so I can filter to just the ones who aren't submitted yet and pester those students".
And like many sibling comments mention, Rails has some good answers for combining rest entities onto the same view in a way that still makes them distinct.
This is a very common pattern and one that’s been solved in Rails by building specialized controllers applying the CRUD interface to multiple models.
Like the Read for a dashboard could have a controller for each dashboard component to load its data or it could have one controller for the full dashboard querying multiple models - still CRUD.
The tight coupling is one of many approaches and common enough to be made default.
You can separate the view and the backend storage without going graphql. You can build your API around things that make sense on a higher level, like "get latest N posts in my timeline" and let the API endpoint figure out how to serve that
It's seemingly more work than graphql as you need to actually intentionally build your API, but it gets you fewer, more thought-out usage patterns on the backend that are easier to scale.
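A sketch of such an intent-level endpoint (names hypothetical):

  class TimelinesController < ApplicationController
    # GET /timeline?limit=20: "latest N posts in my timeline", not raw tables.
    def show
      limit = params.fetch(:limit, 20).to_i.clamp(1, 100)
      posts = current_user.timeline_posts   # however the backend builds it
                          .includes(:author)
                          .order(created_at: :desc)
                          .limit(limit)
      render json: posts.as_json(include: :author)
    end
  end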
The Rails support for multi-model, nested form updates is superb.
Separate entities on the backend - a unified update view if that’s what’s desired.
No need for any outside dependencies.
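For reference, that's the accepts_nested_attributes_for machinery; a sketch:

  class User < ApplicationRecord
    has_one :profile
    has_many :addresses
    # One form submission can create/update the user plus its associations.
    accepts_nested_attributes_for :profile, :addresses
  end

  # The controller permits the nested keys, and a single update writes all
  # the models together:
  #   params.require(:user).permit(:email,
  #     profile_attributes: [:bio],
  #     addresses_attributes: [:id, :street, :city])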
You should check out Phoenix LiveView. You can maintain a stateful process on the server that pushes state changes to the frontend. It's a game changer if you're building a web app.
https://www.youtube.com/watch?v=aOk67eT3fpg&ab_channel=Theo-...
Isn't this where BFF stacks show their worth? As in, those Next.js apps that sit between React and Rails?
Not really, then you're just shifting the complexity from the front-end back to a middle man. Now it still exists, and you still have all the network traffic slowing things down, but it lives in its own little service that your rails devs aren't going to bother thinking about or looking at optimizing.
Much better to just do that in rails in the first place.
Rails is set up for that, but it doesn't force you to build like that. You're free to build in other patterns that you design yourself. It's nice to have simple defaults with the freedom to opt into more complexity only if and when you need it.
> I really like web apps that are just CRUD forms.
I really like easy problems too. Unfortunately, creating database records is hardly a business. With a pure CRUD system you're only one step away from Excel really. The business will be done somewhere else and won't be software driven at all but rather in people's heads and if you're lucky written in "SOP" type documents.
I actually believe that most of useful real-world software is “one step away from Excel”, and that’s fine
Yeah, I agree.
Too many degrees of freedom can degrade an experience, if not used properly.
Why is the ruby/rails community so weird. Half of us just quietly make stuff, but the other half seems to need to sporadically reassure everyone that it's not dead, actually.
> Rails has started to show its age amid with the current wave of AI-powered applications.
Not everything needs to have bloody AI.
> Why is the ruby/rails community so weird. Half of us just quietly make stuff, but the other half seems to need to sporadically reassure everyone that it's not dead, actually.
Half the net merrily runs on PHP and jQuery. Far more if you index on company profitability.
> Not everything needs to have bloody AI.
Some things are an anti-signal at this point. If a service provider starts talking about AI, what I hear is that I'm going to need to look for a new service provider pretty soon.
A former customer of mine is creating AI apps with Rails. After all, what one of those apps needs is to call an API and output the results. Rails, like any other system, is more than capable of doing that.
Based on what I've seen from job postings in the US, you can't start a company in healthcare right now unless you've got AI featuring prominently.
Sadly, I'm not even talking cool stuff like imaging (though it's there too), but anything to do with clinical notes to insurance is all AI-ified.
Truly, it is the new crypto-web3 hype train, except there'll be a few useful things to come out of it too.
Yes now at doctors offices you have the option to sign an agreement for the doctor to wear a microphone to record the conversation and then AI tool automatically creates a report for the doctor. AI and all aspects of medicine seem to be merging.
This kind of thing scares me knowing how bad AI meeting and document summaries are, at least what I’ve used. Missing key details, misinterpreting information, hallucinating things that weren’t said…boy I can’t wait for my doctor to use an AI summary of my visit to incorrectly diagnose me!
> Not everything needs to have bloody AI.
And even if it did, the Ruby eco-system has AI stuff...
ankane to the rescue, as normal
True, hah. Of course, even if it didn't already, most AI libs are actually C++ libs that Python interfaces with, and Ruby has probably the best FFI of any language.
It's very interesting to note that you can build and maintain a meta web framework like RoR in Ruby, in Python (Django), and even in D.
Go and Rust are amazing languages, but why can’t they produce a Rails-like framework?
Is it just a matter of time before Go/Rust create a Rails-like framework, or is something fundamental preventing it?
Perhaps this article by Patrick Li (author of Stanza language) has the answers [1].
[1] Stop Designing Languages. Write Libraries Instead:
Loco is worth keeping an eye on for Rust: https://loco.rs/
The Go community is more framework-averse, preferring to build things around the standard library and generally reduce third-party dependencies. Go also tends to be used more for backends, services and infrastructure and less for fullstack websites than Ruby/Python/PHP/C#.
Or if you want more Next.JS like, but still fullstack framework there is https://leptos.dev/ and https://dioxuslabs.com/. Maybe dioxus being much more ambitious in its scope (not just web).
> Is it just a matter of time before Go/Rust create a Rails-like framework
The key to Rails is the Ruby language; it's very flexible. Someone was able to metaprogram Ruby to the point that it could run JS code.
I would love to see RoR/Django but in Julia. Performance and easy to read code.
I'm wondering this too. Lots of rust web promises, but so far all we have are Flask-likes, and statements-of-intent. Give it time?
If you ask about this in rust communities online, they will tell you they don't want something like this, that Actix etc do everything they need. I'm baffled! Maybe they're making microservices, and not web-sites or web apps?
Dynamically-typed scripting languages like Ruby and Python are well suited to a lot of the kinds of patterns used by the "easy" web frameworks. Once you get into a statically typed, compiled language, the language itself is oriented towards up-front formality which make various "convention-oriented" patterns awkward and ill-fitting to the language.
> Rails has started to show its age amid with the current wave of AI-powered applications. It struggles with LLM text streaming, parallel processing in Ruby
Not at all my experience, actually it was incredibly easy to get this working smoothly with hotwire and no javascript at all (outside the hotwire lib).
We have a Rails app with thousands of users streaming agentic chat interfaces, we've had no issues at all with this aspect of things.
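For anyone curious about the shape of that, a hedged sketch (the LLM client and all names are hypothetical; broadcast_append_to is real turbo-rails API):

  class StreamChatReplyJob < ApplicationJob
    def perform(chat)
      LlmClient.stream(chat.prompt) do |chunk|   # hypothetical streaming client
        # Push each chunk into the page over the chat's Turbo Stream.
        Turbo::StreamsChannel.broadcast_append_to(
          chat,
          target: "chat_#{chat.id}_reply",
          html: ERB::Util.html_escape(chunk)
        )
      end
    end
  end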
Agree. What Rails actually lacks is thousands of ready-made boilerplates that everyone and their grandma can use to spin a chat interface. Any programmer worth his salt should be able to write his own.
The real problem is that programmers who understand how a computer works end-to-end are becoming increasingly rare, possibly accelerated by the adoption of LLMs.
A lot of them prefer to write Ruby because it is simply the most beautiful language they know of. Technical details are merely a formality expressed in code that borders on art.
I was under the impression the industry was collectively moving in that direction, but then bootcamps ushered in a new era of midwit frontend developers hell bent on reinventing the wheel from scratch (poorly).
I've done all of the above in Hotwire. It really is a fantastic tool.
I'd rate it as about 90%-ish of what react gives you at 5-10% of the development effort. React sites can definitely be nicer, but they are so much more work.
This has been my experience as well. Hotwire is actually a more pleasant experience than React.
React is a good choice if you’ve got a huge dev team that can split things into components and independently work on things but otherwise React is so full of footguns that it’s almost comical that people choose it for anything other than bloated VC projects.
I wonder how it compares to Svelte for people. I weighed both but Svelte didn’t require me to learn Ruby (as much as I’m sure I’d enjoy it).
> but Svelte didn’t require me to learn Ruby
You can use HotWire with any language/framework you want.
> Rails has started to show its age amid with the current wave of AI-powered applications. It struggles with LLM text streaming, parallel processing in Ruby, and lacks strong typing for AI coding tools. Despite these constraints, it remains effective.
A plug for Django + gevent in this context! You have the Python type system, and while it's inferior to TypeScript's in many ways, it's far more ubiquitous than Ruby's Sorbet. For streaming and any kind of IO-bound parallelism, gevent's monkey-patches cause every blocking operation to become a event-loop yield... so you can stream many concurrent responses at a time, with a simple generator. CPU-bound parallelism doesn't have a great story here, but that's less relevant for web applications - and if you're simultaneously iterating on ML models and a web backend, they'd likely run on separate machines anyways, and you can write both in Python without context-switching as a developer.
If you want something more similar to Next.JS but in the python world, now you have https://fastht.ml/, which also has a big performance benefit over Django. Hahaha, same as Next.JS over Rails, because it is much more bare bones. But I would say that fasthtml has the advantage of being super easy to integrate more AI libraries from the python world.
now that was a crazy rabbit hole
> You have the Python type system, and while it's inferior to TypeScript's in many ways, it's far more ubiquitous than Ruby's Sorbet.
I'm a big fan of Ruby, but God I wish it had good, in-line type hinting. Sorbet annotations are too noisy and the whole thing feels very awkwardly bolted on, while RBS' use of external files make it a non-starter.
Do you mean Ruby lacks syntactic support for adding type annotations inline in your programs?
I am one of the authors of RDL (https://github.com/tupl-tufts/rdl), a research project that looked at type systems for Ruby before they became mainstream. We went for strings that looked nice but were parsed into a type signature. Sorbet, on the other hand, uses Ruby values in a DSL to define types. We were of the impression that many of our core ideas were absorbed by other projects, and Sorbet and RBS are pretty much mainstream now. What is missing to get usable gradual types in Ruby?
My point isn't technical per se, my point is more about the UX of actually trying to use gradual typing in a flesh and blood Ruby project.
Sorbet type annotations are noisy, verbose, and are much less easy to parse at a glance than an equivalent typesig in other languages. Sorbet itself feels... hefty. Incorporating Sorbet in an existing project seems like a substantial investment. RBS files are nuts from a DRY perspective, and generating them from e.g. RDoc is a second rate experience.
More broadly, the extensive use of runtime metaprogramming in Ruby gems severely limits static analysis in practice, and there seems to be a strong cultural resistance to gradual typing even where it would be possible and make sense, which I would - at least in part - attribute to the cumbersome UX of RBS/Sorbet, cf. something like Python's gradual typing.
Gradual typing isn't technically impossible in Ruby, it just feels... unwelcome.
None of my customers ever asked for type definitions in Ruby (nor in Python). I'm pretty happy with the choice of hiding types under the carpet of a separate file. I think they made it deliberately, because Ruby's core team didn't like type definitions but had to cave to the recent fashion. It will swing back, but I think this is a slow pendulum. Speaking for myself, I picked Ruby 20 years ago exactly because I didn't have to type types, so I'm not a fan of the projects you are working on, but I don't oppose them either. I just wish I'm never forced to define types.
I for one really like RBS being external files, it keeps the Ruby side of things uncluttered.
When I do need types inline I believe it is the editor's job to show them dynamically, e.g via facilities like tooltips, autocompletion, or vim inlay hints and virtual text, which can apply to much more than just signatures near method definitions. Types are much more useful where the code is used than where it is defined.
I follow a 1:1 lib/.rb - sig/.rbs convention and have projection+ files to jump from one to the other instantly.
And since the syntax of RBS is so close to Ruby I found myself accidentally writing things type-first then using that as a template to write the actual code.
Of note, if you crawl soutaro's repo (author of steep) you'll find a prototype of inline RBS.
+ used by vim projectionist and vscode projection extension
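For readers who haven't seen the convention, a minimal example of the split:

  # lib/greeter.rb
  class Greeter
    def initialize(name)
      @name = name
    end

    def greet = "Hello, #{@name}!"
  end

  # sig/greeter.rbs: the types live here, so the .rb stays uncluttered
  class Greeter
    def initialize: (String name) -> void
    def greet: () -> String
  end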
Django shouldn't even require gevent - Django's ASGI support has been baking for a few releases now and supports async views which should be well suited to proxying streams from LLMs etc.
Relevant:
- https://fly.io/django-beats/running-tasks-concurrently-in-dj...
- https://blog.pecar.me/django-streaming-responses
(Reminds me I should try that out properly myself.)
then you have to rewrite your whole app to use asyncio keywords and colored ORM methods. A gevent monkey patch, or eventually nogil concurrency makes a lot more practical sense.
You don't have to rewrite your whole app - you can continue using the regular stuff in synchronous view functions, then have a few small async views for your LLM streaming pieces.
I've never quite gotten comfortable with gevent patches, but that's more because I don't personally understand them or what their edge cases might be than a commentary on their reliability.
Last time I checked, simply running Django through the ASGI interface dealt you a baseline perf hit.
Just move to Elixir. Phoenix is Rails-like enough and the platform is excellent for parallelisation, clustering in specific hardware and so on.
And the switch is rather easy. I've been writing elixir for nearly 10 years, rails before that, and have overseen the "conversion" of several engineers from one to the other.
Generally I'd say any senior Rails dev, given the right task, can write decent Elixir code on their first day. There are a lot fewer footguns in Elixir and Phoenix, so other than language ergonomics (a simple rule that doesn't stretch far but works at the beginning: use pipe instead of dot), there are minimal barriers.
Honest question from someone working on a non-negligible Rails codebase: what would be my gains, were I to switch to Elixir?
I've watched Elixir with much interest from afar, I even recently cracked open the book on it, but I feel like my biggest pain points with Ruby are performance and lack of gradual typing (and consequent lack of static analysis, painful refactoring, etc), and it doesn't really seem like Elixir has much to offer on those. What does Elixir solve, that Ruby struggles on?
Performance of what, exactly? Hard to beat the concurrency model and performance under load of elixir.
Elixir is gaining set theoretic type system, so you are showing up at the right time. https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h...
> Performance of what, exactly? Hard to beat the concurrency model and performance under load of elixir.
The performance of my crummy web apps. My understanding is that even something like ASP.NET or Spring is significantly more performant than either Rails or Phoenix, but I'd be very happy to be corrected if this isn't the case.
I appreciate the BEAM and its actor model are well adapted to be resilient under load, which is awesome. But if that load is substantially greater than it would be with an alternative stack, that seems like it mitigates the concurrency advantage. I genuinely don't know, though, which is why I'm asking.
> Elixir is gaining set theoretic type system, so you are showing up at the right time. https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h...
Neat! Seems clever. Looks like it's very early days, though.
Some of the big performance wins don’t come from the raw compute speed of Erlang/Elixir.
Phoenix has significantly faster templates than Rails by compiling templates and leveraging Erlang's IO Lists. So you will basically never think about caching a template in Phoenix.
Most of the Phoenix “magic” is just code/configuration in your app and gets resolved at compile time, unlike Rails with layers and layers of objects to resolve at every call.
Generally Phoenix requires way less RAM than Rails and can serve like orders of magnitude more users on the same hardware compared to rails.
The core Elixir and Phoenix libraries are polished and quite good, but the ecosystem overall is pretty far behind Rails in terms of maturity. It’s manageable but you’ll end up doing more things yourself. For things like API wrappers that can actually be an advantage but others it’s just annoying.
ASP.NET and Spring Boot seem to only have theoretical performance; I'm not sure I've ever seen it in practice. Rust and Go are better contenders IMO.
My general experience is Phoenix is way faster than Rails and most similar backends and has good to great developer experience. (But not quite excellent yet)
Go might be another option worth considering if you’re open to Java and C#
There are three reasons to choose Elixir, or perhaps any technology:
The community and its values; because you enjoy it; because the technology fits your use case. Most web apps fit. 1 and 2 are personal, and I'd take a 25% pay cut to not spend my days in ASP or Spring, no offense to those who enjoy it.
Normally "switch languages" isn't great advice, but in this case I think it's worth considering. I have heard people coming from Django and Rails background describe Elixir as "a love child between python and ruby". Personally I love it
But does Elixir come with a whole scientific computing ecosystem?
To add to what others mentioned, there’s also https://github.com/livebook-dev/pythonx which embeds a Python interpreter into your elixir program.
Not to the same degree that Python does (then again no other general-purpose language does!), but it does have the start of one and it's fairly cohesive.
Fortran? R? C? C++? Even Java may occasionally make a good showing here (depending on what you are doing).
Having seen... things... unless it's written by people with the right skillset (and with funding and the right environment), that it exists doesn't mean you should use it (and the phrase "it's a trap" comes to mind sadly). https://scicomp.stackexchange.com/a/10923/1437 applies (and note I still wouldn't call Julia mainstream yet), so while I'm not saying people shouldn't try, the phrase "don't roll your own crypto" applies just as much to the numeric and scientific computing fields.
Must your statistical computing ecosystem comingle with your web interface?
You can split it off and have your Python code be an API you call, but now you have at least two languages involved (Python+Elixir, plus JS somewhere, plus the possible mix of C/C++/Fortran/Rust(maybe?)). Given Ruby on Rails was mentioned, just using Django seems similarly like the least risky thing to do (this all assumes you are doing numerical stuff, not just a standard CRUD app).
I'm not super tuned into the scientific computing ecosystem, so not sure if this is what you mean. But maybe? Elixir's Numerical Elixir projects seem very relevant for scientific computing. Check 'em out: https://github.com/elixir-nx
Edit: Hah! aloha2436 beat me to the answer. Sorry for the repetition.
And if you want to make the move, I know a great resource: http://PhoenixOnRails.com
</shamelessselfplug>
I personally think Elixir is a great language, but the jump from ruby to functional programming is big enough that I'm not sure it's useful general advice.
Also, the size of the Elixir community and the libraries available are completely dwarfed by Rails. Elixir, Phoenix, all the core stuff is really high quality, but in many cases you might be doing more work that you could have just pulled in from a gem in Ruby. It's unfortunate IMO. It's an underrated language.
Very much this.
I think the community tends to overestimate the ecosystem’s maturity which is one of the big things holding it back, both because it blinds the community to areas that need improvement and leads to bigger shocks when newcomers do unexpectedly run into the rough edges.
RoR is a beast, and it has its place. The issue we have today is that everything is too fast paced, so fast that people feel the need to follow the latest and greatest or be left behind.
This has (in my opinion) led to a false sense that if something is not hyped as often, then it's not used either.
What do you mean "left behind"? Are you saying people will actually get "left behind" or just that people will _feel_ like they're left behind?
At this point you can find tools that can make demos easier to build or get you further in a hackathon, but Rails embodies "Slow is steady and steady is fast." If you're trying to build something that will stick around and can grow (like a startup outside of the latest VC crazes), then Rails will arguably do better at keeping your tools relevant and supported in the long run. That is, assuming you're building something that needs a steady backend for your application.
> At this point you can find tools that can make demos easier to build or get you further in a hackathon.
I don't understand this at all. Ruby on Rails is probably peak technology for getting something up and running fast at a hackathon. It's a very streamlined experience with a ton of drop-in plugins for getting to the product part of the MVP. Maintaining a Ruby app is a nightmare over time, though. At least it was 5 years ago, the last time I worked full-time in a startup using Ruby on Rails.
These days I use Elixir. It's higher-performance and reasonably fast to write in, but I wouldn't say it's as productive as Ruby on Rails if you're competing in a hackathon.
Maintenance nightmares are a product of organizational culture, not any particular framework.
The language encourages metaprogramming and discourages typing. This makes maintenance much more complicated compared to other languages such as Python, TypeScript, or PHP.
Any language can get you a maintenance nightmare, but a lack of types and a monolith will get you there faster.
Nothing in Ruby forces you to make it a monolith, of course, but the lack of types hurts.
> What do you mean "left behind"? Are you saying people will actually get "left behind" or just that people will _feel_ like they're left behind?
Feel.
Nah, RoR failed because nobody wants to write code in an untyped, monkey-patch-friendly language anymore.
I don't know if I'd agree that RoR failed; from my recent experience, it's still a sought-after tool for startups.
I do share your opinion on the untyped part. It's a bit of a bummer, but there are Ruby gems that help with that.
Regarding the monkey patches, it's a concern many have and because of that, there is now a cleaner way of doing it! It's called a refinement. It's like monkey patching but in a more controlled way where you don't affect the global namespace.
https://docs.ruby-lang.org/en/master/syntax/refinements_rdoc...
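To illustrate (a toy sketch, not taken from the docs): a refinement only takes effect in scopes that opt in with `using`, so nothing leaks globally.

    module Shout
      refine String do
        def shout
          upcase + "!"
        end
      end
    end

    # "hello".shout would raise NoMethodError here; the patch isn't active yet.

    using Shout
    "hello".shout  # => "HELLO!", but only in the file/scope that opted in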
Heh, maybe us engineers need to be better disciplined about what "greatest" is.
> maybe us engineers
I’ve started qualifying such statements… “you mean a real engineer, or just a software developer?”
You mean “CSS engineer” is not a true title!?
> lacks strong typing for AI coding tools
I've heard this criticism a few times – the fear that LLMs will be bad at Rails because there's no types – and I don't think it's accurate.
At least in my experience (using the Windsurf IDE with Claude 3.5 Sonnet) LLMs do a very good job in a Rails codebase for stuff like "I want to create a new page for listing Widgets, and a Create page for those Widgets. And then add pagination.". I've been able to spin up whole new entities with a model/view/controller and database migration and tests, styled with tailwind.
I think the reason strong types don't matter as much as we might assume is because Rails has very strong conventions. Routing lives in routes.rb, controllers go under app/controllers, most controllers or models will look very similar to other ones, etc.
Type information is something that has to be presented to the LLM at runtime for it to be accurate, but convention-over-configuration is stuff that it will have picked up in training data across thousands of Rails apps that look very similar.
On top of that, the core Rails stuff hasn't drastically changed over time, so there's lots of still-accurate StackOverflow questions to train on. (as opposed to something like Next.js which had a huge upheaval over app router vs pages router, and the confusion that would cause in training data).
In my opinion the future of LLM-aided Rails development seems pretty bright.
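To make "strong conventions" concrete: for the Widgets example above, the code is nearly determined by convention alone. A rough sketch of the shape any Rails dev (or an LLM) would produce, with Widget as the hypothetical model:

    # config/routes.rb
    resources :widgets

    # app/controllers/widgets_controller.rb
    class WidgetsController < ApplicationController
      def index
        @widgets = Widget.all
      end

      def create
        @widget = Widget.new(widget_params)
        if @widget.save
          redirect_to @widget
        else
          render :new, status: :unprocessable_entity
        end
      end

      private

      # Strong parameters; the permitted attributes depend on the schema.
      def widget_params
        params.require(:widget).permit(:name)
      end
    end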
You make some good points, but I think as AI continues progressing down the road of "reasoning", any data points that allow it to reason more will be helpful, including (and maybe especially) types. AI could definitely reason about rails too, and perhaps it will quickly get really good at that (especially if it "understands" the rails source code) but it's hard to think of a situation in which less data is more useful than more data
I think types can help, but I don't think they're "strong" enough to overrule training data.
I just ran into an example of this trying to get it (Windsurf + Claude) to "rewrite this Objective C into Rust using the objc2 crate". It turns out objc2 made some changes and deprecated a bunch of stuff.
It's not figuring stuff out from base principles and reading the types to write this code, it's just going off of all the examples it was trained on that used the old APIs, so there's lots of errors and incorrect types being passed around. Hopefully it'll keep getting better.
I suspect long-term LLMs spell the end of typed language popularity in most application programming contexts.
I agree with The Grug Brained Developer (https://grugbrain.dev/) that “type systems most value when grug hit dot on keyboard and list of things grug can do pop up magic. this 90% of value of type system or more to grug”.
This already is being heavily replaced by LLMs (e.g. copilot) in many people’s workflows. Co-pilot’s suggestions are already mostly higher level, and more useful, than the static typing auto-complete.
I believe the quality-of-autocomplete gap between typed and untyped languages has already largely converged in 2025. Co-pilot today writing TypeScript just doesn’t produce overwhelmingly better auto-complete results than JavaScript. Compare with 4 years ago, when JavaScript auto-complete was trash compared with TS. And even then, people argued the merits of untyped: all else being equal, less is more.
What happens when “all else” IS equal? ;-)
Currently, static typing can help the LLM generate its code properly, so it has value in helping the LLM itself. But, once the LLM can basically hold your whole codebase in its context, I don’t see much use for static typing in implementing the “hit dot on keyboard, see list of things you can do” advantage. Essentially, the same way type inference / auto lets languages skip repetitive specification typing, by holding your whole codebase in memory the LLM can mostly infer the type of everything simply by how it is called/used. LLMs take type inference to the next level, to the degree that the type barely needs to be specified to know “press ., see what you can do”
I rarely use the static typing type of auto-completion when programming now, almost everything I accept is a higher level LLM suggestion. Even if that’s not true for you today, it might be tomorrow.
Is the remaining 10% of “formal correctness” worth the extra volume of characters on the screen? I suspect Rust will do well into the distant LLM future (used in contexts where formal correctness is relatively important, say kernels), and I suspect TypeScript will decrease in popularity as a result of LLMs.
> Is the remaining 10% of “formal correctness” worth the extra volume of characters on the screen?
Yes, if only just for the ease of large-scale refactorings. And "extra volume of characters" is very limited if you use a modern language. In Haskell you could even not write a single type annotation in all of your code, although that's not recommended.
I doubt most people who like static types only do so because of autocomplete.
So we have an LLM code scaffold repo we use in a large (2m loc) production Rails codebase and it works amazingly well.
Rails and especially Ruby lends itself to describing business logic as part of source code closer to natural language than a lot of typed languages imo and that synergizes really well with a lot of different models and neat LLM uses for code creation and maintenance.
Interesting! What sort of stuff goes in the scaffold repo? Like examples of common patterns?
Definitely agree I think Ruby's closeness to natural language is a big win, especially with the culture of naming methods in self-explanatory ways. Maybe even moreso than in most other languages. Swift and Objective C come to mind as maybe also being very good for LLMs, with their very long method names.
it's fairly bespoke, but some examples:
ETL pipelines: we catalogue and link our custom transformers to bodies of text that describe the business cases for them, with some examples. You can then describe your ETL problem in text and it will scaffold out a pipeline for you.
Fullstack scaffolds that go from models to UI screens. We have a set of standard components and conventions for how they interact and communicate through GraphQL to our monolith (e.g. server-side pagination, miller-column grouping, sorting, filtering, PDF export, etc.). So if you make a new model, it will scaffold the CRUD fully for you, all the way to the UI (it gets some stuff wrong, but it's still a massive time-save for us).
Patterns for various admin controls (we use active admin, so this thing will scaffold AA resources how we want).
Refactor recipes for certain things we've deprecated or improved. We generally don't migrate everything at once to a new pattern, instead we make "recipes" that describe the new pattern and point it to an example, then run it as we get to that module or lib for new work.
There are more, but these are some off the top of my head.
I think a really big aspect of this, though, is the integration of our scaffolds and recipes in Cursor. We keep these scaffold documents in markdown files that are loaded as Cursor notepads, which reference real source code.
So we rely heavily on the source code describing itself; the recipe or pattern or scaffold just provides a bit of extra context on different usage patterns and links the different pieces or examples together.
You can think of it as giving an LLM "pro tips" about how things are done in each team and repo, which allows for rapid scaffold creation. A lot of this you can do with code generators and good documentation, but we've found this usage of Cursor notepads for scaffolds and architecture is a less labour-intensive way to keep it up to date and to evolve a big codebase in a consistent manner.
---
Edit: something to add: this isn't a crutch; we require our devs to fully understand these patterns. We use it as a tool for consistency, for rapid scaffold creation, and of course for speeding up things we haven't gotten around to streamlining (like repetitive bloat).
I've found LLMs are pretty good at generating basic code for everything except the specs/tests when it comes to Rails. A lot of my work lately has taken like 4x more time on specs/tests than on actually creating the application code, because the LLM just isn't cutting it for that part.
> At least in my experience (using the Windsurf IDE with Claude 3.5 Sonnet) LLMs do a very good job in a Rails codebase for stuff like "I want to create a new page for listing Widgets, and a Create page for those Widgets. And then add pagination.". I've been able to spin up whole new entities with a model/view/controller and database migration and tests, styled with tailwind.
Does it suggest using rails generators for this - and/or does it give you idiomatic code?
The last time I tried this it created idiomatic code from scratch. I prompted it in phases though, and I suspect if I had asked it for more at once it might've used a generator.
I've noticed that, in agent workflows like Cursor, they're able to use built-in type checkers to correct errors.
With Ruby, it doesn't have as much information, so it has to rely on testing or linters for feedback.
I haven't seen it run into a ton of issues like this when it can see all of the files, but I did hit issues where it would make guesses about how e.g. the Stripe gem's API worked and they'd be wrong.
Overall with Rails though, testing has always been pretty important partly because of the lack of types. I have noticed Windsurf is pretty good at taking a controller or model and writing tests for it though!
In Elixir land we have Instructor. It hits AI endpoints cleanly, and then validates the returned JSON using Ecto Changesets. Very powerful, clean abstraction. Love it!
https://hexdocs.pm/instructor/Instructor.html
Someone in Rails land could build similar and voila.
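A hedged sketch of what "similar in Rails land" might look like with plain ActiveModel (class and attribute names made up): parse the model's JSON, then validate it before trusting it.

    require "active_model"
    require "json"

    class Recipe
      include ActiveModel::Model
      include ActiveModel::Attributes

      attribute :title, :string
      attribute :minutes, :integer

      validates :title, presence: true
      validates :minutes, numericality: { greater_than: 0 }
    end

    raw = '{"title": "Soup", "minutes": 30}'  # pretend this came back from the LLM
    recipe = Recipe.new(JSON.parse(raw))
    recipe.valid?  # => true; on failure, recipe.errors tells you what to re-prompt for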
It’s interesting to see how convention over configuration had its heyday in the 2010s. Angular, EmberJS, Django, and Rails were very, very popular. Now the new type of modern stack, e.g. React/Next.js with bespoke backends consisting of things like Node.js spaghetti with Express, seems to have a lot of traction.
I base the above assertion mainly on looking at Who’s Hiring posts btw.
sidenote - is NextJS really the best “convention over configuration” approach for react? I’d love to just use ember, but most of the community has moved to react, but I really enjoy the opinionated approach
> sidenote - is NextJS really the best “convention over configuration” approach for react? I’d love to just use ember, but most of the community has moved to react, but I really enjoy the opinionated approach
You might like Remix [0] (I do).
[0]: https://remix.run
it’s been quite a few years since I’ve worked in Rails, but I miss it sometimes. None of the other platforms ever completely replicated the functionality of a standard Rails environment circa 2009, so we reinvent the wheel every time. Basic stuff, too: ORM hooks, validations. It’s always a relief when I get to work with someone who has also worked on Rails before, because it means we have a shared vocabulary - there’s no equivalent thing among Python programmers, or JVM programmers, or anywhere else that I’m aware of
Laravel for PHP has a similar scope, community and exposure I would say.
Having mostly done agency work my whole life I have seen a lot of frameworks, in a lot of languages. Rails and Laravel are the standouts for just QoL experience, and Getting Shit Done.
They're engineered to get out of the way, and modular or extensible enough to enable deeper modifications easily.
In an era where everything and their mother is getting rewritten in Rust, surely we should be able to get a proper, fully featured, batteries included web framework out of it too. But it seems like all Rust web frameworks are either extremely low level (so low level that I don't see their value add at all), or extremely unfinished. Last I checked, even things like "named routes" or TLS were exotic features that required manual (and incompatible!) workarounds.
It's kind of fascinating to me that all the frameworks in 'slow', dynamic languages (Rails, Laravel) are feature packed and ready to roll, and everything in 'fast', static languages is so barren by comparison. The two seem almost exactly inversely proportional, in fact.
A batteries-included web framework in Rust with comparable built-in functionality to Rails, and comparable care for ease-of-use, would be game changer.
As a rustacean, I completely agree. A big chunk of the Rust ecosystem is obsessed with performance, and sees static typing as the way to achieve that. This approach generates extremely efficient code, but blows up compile times and creates a messy hell of generics (and accompanying type check errors).
I think there is a space for a more dynamic and high-level web framework in Rust, with an appropriate balance between leveraging the powerful type system and ease of use.
For the JVM, Apache Causeway provides similar capabilities (in fact, even more abstracted than RoR). Full disclosure: I'm a committer on that project.
Django is exactly that for Python.
You know a language is dying when you start seeing articles like this.
"It struggles with LLM text streaming, parallel processing in Ruby[3], and lacks strong typing for AI coding tools."
What's the struggle specifically? How these general articles of opinion get to the first page of HN I'll never understand. Just random statements without anything to back them up.
Any thoughts on Inertia.js, which seems like a good solution for React + Rails? Feels like you can have your cake and eat it too.
This looks fairly lightweight and clean, but you immediately replace a large portion of the Rails ecosystem with React and will constantly need to account for that when deciding how to build your application. By sticking closer to "the Rails way" you get the support of its massive community.
If Inertia.js development halts, then you're stuck with either a) adopting something else, or b) maintaining the tool for your own use cases. Using something like this would, IMO, be closer to building a Rails app in API mode with a separated frontend than adding a new library on top of Rails.
If you just want React+Rails, the rails generator command comes with a bunch of options to set that up for you, including setting up and configuring: React/Vue/etc, a bundler like vite, typescript, tailwind.
It looks like inertia has additional features though.
I'm not aware of the generator supporting all that.
Here's what I get:
`Possible values: importmap, bun, webpack, esbuild, rollup`
inertia, I think, avoids writing an api to bridge rails/react
This looks interesting. I think I'll try it out over the weekend. Thanks for sharing.
I am using Django and I do understand the sentiment.
But everything old is new again.
Today there is better tooling than ever for these tools. I am using Django with htmx + alpine.js and sending HTML instead of JSON. Breaking free from JSON REST APIs is a huge productivity boost.
Also wanted to mention Django and Python because Python is evidently doing even better in the age of AI, and building back-end-heavy ML apps with it is much easier than in JavaScript land.
I feel for you. I'm a Rails developer and I recently joined a Django project... Django feels so far behind Rails... But everyone has their own preference and opinion...
What in particular? Never tried Rails so I want to know what I'm missing.
Along this vein: I learned programming ~15 years ago. On my own, as a hobby. Now it's a hobby and day job. Lots of tech churn in this time period. I've branched out into many domains.
The constant that's been with me from start to end: Django. Because it's fantastic and versatile. I still am kind of bitter the tutorial I followed wasted so much time on tangents regarding VirtualBox, Vagrant, Chef... I program most things in rust, but not web, because there is nothing there that compares.
RoR is great. Ruby just needs to grow beyond it.
I worked at a company that, when faced with the choice between rewriting its Django apps in Python 3, and rewriting them in RoR, decided to go with the latter.
Now, I didn't like that since I was on an undermanned team that had literally just started a major update of a Django site, and it arguably wasn't the right way to go business-wise, but a lot of ideas that have come into Django over the years were ideas that existed in RoR.
I'd like to see that sort of innovation happen in some of the other spaces that Python is in, if for no other reason than to prevent monoculture in those areas. There need to be offerings for Ruby in other areas, like scientific computing, machine learning/AI, and data analysis, that get the same uptake that Rails does.
> I'd like to see that sort of innovation happen in some of the other spaces that Python is in
I think the language itself is definitely good enough to support the things Python does. But Ruby lacks in things like documentation and standards. RDoc isn't great, neither is bundler and there's no alternative to PEP.
Ruby's ecosystem does have some interesting alternatives to Python's ecosystem, like Numo for NumPy; Jupyter also supports Ruby. But for a matplotlib alternative you have to bind to gnuplot.
I still use Ruby much more than Python anyway, my scripts often don't need external packages and Ruby as a language is such a delight to work with.
In an era of microservices-and-k8s-all-the-things, Rails monoliths are a breath of fresh air. For stuff that's really performance- or latency-sensitive, tacking on a satellite service in Go or Rust works great.
This is an unfortunate comparison. I actually chose Next.js because of its similarity to Rails - it's a batteries included, opinionated framework that favors convention over configuration (though it's not sold that way since these are not the currently trending buzzwords). There's absolutely nothing preventing you from using both tools. Rails works great as an API supporting a Next.js UI.
I'd say Next.js is the opposite of a "batteries included" framework. No abstractions for ORM, background jobs, sending emails, managing attachments, web socket communication - all very basic stuff when dealing with a production application.
It is a batteries included _front end_ framework. You don't need to worry about compiling, routing, code splitting, etc. Most of the things you described should be handled by the back end service
>It is a batteries included _front end_ framework.
From the first page of Next.js docs: "Next.js is a React framework for building full-stack web applications"
> You don't need to worry about compiling, routing, code splitting, etc
IMO that's the least you'd expect from a web framework.
The back-end service being Vercel, and its proprietary offerings.
Next.js doesn't even have authorization. What does it have? Server-side rendering? Cool.
Hey, let's be fair here: Rails also doesn't have built-in authorization. You need something like Pundit or CanCanCan if you don't want to build it yourself.
Also Rails only recently got authentication. For more than a decade you needed Devise or something else.
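For reference, Pundit authorization looks roughly like this (names illustrative; assumes the controller includes Pundit::Authorization):

    class PostPolicy < ApplicationPolicy
      def update?
        user.admin? || record.author_id == user.id
      end
    end

    # In a controller: `authorize` infers PostPolicy#update? from the action
    # name and raises Pundit::NotAuthorizedError when it returns false.
    def update
      @post = Post.find(params[:id])
      authorize @post
      @post.update!(post_params)
      redirect_to @post
    end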
I mean it has a router (2 actually), and NextAuth seems to be becoming something of a standard for many Next devs.
Meanwhile.. last I checked you still had to choose how you were going to roll your own auth in rails. Are people not often just installing bcrypt and adding a users table with their password hash? Or is there a generator for all that now?
Anyway, I disagree with the idea that Next is Rails-like. Adonis is probably still the closest in the JS/node ecosystem, though Redwood might also serve a similar niche for the types of apps it works for.
Next and the other "frontend metaframeworks" (as they're called now), are certainly much closer than the most popular choices 7 or 8 years ago (often cobbling together React and Express and an ORM like Prisma, making a bunch of other decisions, and then doing a bunch of the integration work by hand)
Devise has made it easy to add auth to rails apps for many years now. More recently there is also the built in auth generator.
Right, so Devise seems to be for Rails what NextAuth is for Next? Though I don't know if there's anything equivalent to Rails' code generation yet.
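For what it's worth, the roll-your-own path is mostly one macro in stock Rails. Assuming the bcrypt gem and a users table with a password_digest column, it's roughly:

    class User < ApplicationRecord
      has_secure_password  # adds password=, password_confirmation=, authenticate
    end

    user = User.create!(email: "me@example.com", password: "s3cret")
    user.authenticate("s3cret")  # => the user record
    user.authenticate("nope")    # => false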
Do you have a suggestion for a more Rails-esque framework (maybe Django)?
If we were keeping in the JS ecosystem, there’s Redwood [0] which has been around a while*.
* not comparable to Rails or Django’s definition of “a while” but it’s quite mature.
By all means use Django if you specifically want to work in python but otherwise if you really want a Rails-esque framework why not just use full stack Rails?
You get a lot out of the box with Rails 8 now, like deployment, caching, a job queue, Hotwire, Turbo Frames, and mobile.
All these features are stateful or realtime. In a cloud/serverless world, they are all separate managed services ("compute/storage separation"). That's the trade-off of Next.js, greater productivity by standing on top of more hosted dependencies. Theoretically unlimited (within datacenter limits) scaling, bottlenecked only by your credit card.
Next is definitely not "batteries included". It solves close to nothing on the backend (like all fullstack JS frameworks).
Well, not all of them [1].
DB access (drivers are automatically started, connected, and wired for use), queues, cron jobs, websockets, uploads, API helpers, simple routing, caches, indexes...
It gets ignored, but there are (sane) options. I'm quite proud of the APIs, too. Easy to learn, tidy, and everything just works.
Ok, you're right.
I was referring to the usual ones (Next, Nuxt, SvelteKit, Remix, etc).
Joystick looks cool. Besides this, there's also NestJS.
wouldn't using the nextjs backend / server components be far simpler and more streamlined?
> Rails has started to show its age amid the current wave of AI-powered applications.
A feature, not a bug. Rails will continue to be good for building things long after the AI boom has died down.
To go in the other direction, static site generators (SSGs) also have a place on the menu. Build locally. Host them on your favorite CDN. I personally really like Zola (Rust), inspired by Hugo (Go).
Fwiw, Next.js has a solution for that: https://nextjs.org/docs/pages/building-your-application/rend...
> Many of today's most polished products, like Linear and ChatGPT launched as Next.js applications, and treated mobile apps as secondary priorities.
Linear was started on next.js? I thought they built a custom sync engine? https://linear.app/blog/scaling-the-linear-sync-engine
I feel like this article is hyping up the importance of next.js.
The data layer is an orthogonal choice to the frontend framework / library.
You could use Next.js + any API to create an application.
You could use Next.js + a sync engine to create an application.
You could use React Router + Vite + any API to create an application.
You could use React Router + Vite + a sync engine to create an application.
Isn't next.js a full stack framework though? Like can't you have it do server side rendering? https://nextjs.org/docs/app/building-your-application/data-f...
Full-stack is an overloaded term, but it used to mean "a complete solution for building a web app."
From the comment above: Next.js is the opposite of a "batteries included" framework. No abstractions for ORM, background jobs, sending emails, managing attachments, web socket communication - all very basic stuff when dealing with a production application.
Next.js solves the hard thing of server rendering + frontend hydration of JS components.
So if that's the battery that you need, pretty much nothing else has it except for Next.js.
These days, I tend to want a web framework to do the hard things for me rather than the tedious/boilerplate but simple things like email-sending.
> It became the foundation for numerous successful companies - Airbnb, Shopify, Github, Instacart, Gusto, Square, and others. Probably a trillion dollars worth of businesses run on Ruby on Rails today.
Do those companies still run their businesses on RoR? My impression was that companies started out with it, but migrated to something more robust as their traffic grew.
Airbnb, Shopify and GitHub I can say never migrated away from RoR. The others I don't know.
Shopify is actually quite active in Ruby development and famously uses the new JIT compiler.
Shopify made the new Ruby JIT compiler. [0] They're on the Rails Foundation, as is 1Password, among others.
Stripe is still in on Ruby too; they're behind the Sorbet gradual type system, and Ruby is a first-class language for working with their API.
I always hear the stereotype of companies starting on Rails and migrating later, and I think it sticks around because it makes some level of intuitive sense, but it doesn't actually appear to happen all that often. I guess successful companies don't see the need to rewrite a successful codebase when they can just put good engineers to work on optimising it instead.
[0] https://shopify.engineering/ruby-yjit-is-production-ready
Instacart had an engineer presenting at Rails World 2024 just a few months ago; they're still heavily invested in the platform.
I know for sure Shopify, Github, and Gusto are big Rails shops.
Amazon was recently hiring for Rails to work on a podcast app they bought, too.
Yep, I’m at Amazon, now working on said podcasting subsidiary, ART19. We’re a rails shop.
There are a few acquisition companies and teams using Rails inside Amazon.
Rails makes me really appreciate its dictatorial nature (DHH). Compared to the free-for-all landscape in JavaScript, Rails moves a lot slower, but decisively.
I started using Rails in 2014, and I think these are some of the most exciting days in Rails. Hotwire, Turbo, Stimulus + no-build JS pushed the framework into what feels like next-generation web development.
While all the same patterns exist in JavaScript, it seems like there are 5, 6, 7, 8 ways to do everything. Something as trivial as authentication has multiple implementations online, with multiple libraries, which is hugely frustrating.
Of course it still matters.
What else would both teach programmers that nice languages exist, and that OOP leads to a nondeterministic spaghetticode hellscape? ;)
(I once spent an entire month debugging a transient login issue (a session-loss bug) on a million-line RoR app. The root cause turned out to be code that merged a HashWithIndifferentAccess with a regular Hash, where key overwrite was nondeterministic. This type of bug is doubly impossible in something like Elixir, both because it is data-focused rather than inheritance-focused, and because it is immutable.)
HashWithIndifferentAccess is one of these IMHO stupid design decisions of Rails that sacrifice maintainability (and consistency with Ruby, the language) just for the sake of making things slightly easier to write.
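A toy version of that class of bug (not the actual code): mixing the two hash types silently gives you two "different" keys.

    require "active_support/core_ext/hash/indifferent_access"

    plain  = { user_id: 1 }                                # plain Hash, symbol key
    indiff = { "user_id" => 2 }.with_indifferent_access    # keys stored as strings

    merged = plain.merge(indiff)
    merged.keys       # => [:user_id, "user_id"]
    merged[:user_id]  # => 1
    merged["user_id"] # => 2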
As someone who has worked directly with Ruby on Rails developers, their workflows, and their deployments: it is far too niche to be viable in the mainstream, and it offers even less incentive compared to newer languages.
But it is a fun language.
Also, the latest Python is faster than current Ruby; let that sink in. You can go even faster if you use a compiled Python like PyPy.
It's still my favourite web framework. I just wish the Ruby language had better support for type annotations (like Python does). Then it'd be sorta perfect for me
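There is opt-in gradual typing via Sorbet (and RBS), though it's bolted on rather than built in. A small Sorbet sketch (toy example):

    # gem "sorbet-runtime"
    require "sorbet-runtime"

    class Converter
      extend T::Sig

      # Checked at runtime by sorbet-runtime; `srb tc` can also check it statically.
      sig { params(celsius: Float).returns(Float) }
      def to_fahrenheit(celsius)
        celsius * 9.0 / 5.0 + 32.0
      end
    end

    Converter.new.to_fahrenheit(100.0)   # => 212.0
    # Converter.new.to_fahrenheit("100") raises a TypeError at the call site.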
"Next.js enabled websites to approach iPhone app quality."
This is a fascinating perspective, because building PWAs with raw JS and very early React, I always felt those were as good as iPhone-app quality.
I think Rails's big contribution is the idea of convention over configuration. Maybe this is my own myopia, but Django feels like Rails, and NextJS also borrows from Rails. I've only managed one Rails project in production, and I had to come up to speed really fast to support it, but I loved it.
Starting today, in what scenario would RoR be a better option than Next.js for building a web app? Assuming one has to start from 0 -> 1.
I don't hate NextJS or anything, but I've never met a JS backend that I loved a whole lot compared to a conventional Rails one. They always turn out to be missing little details, and trying to fill them in always feels like square-peg-round-hole misalignment that never quite ends.
Almost all scenarios
I outline two in the article:
1. One-person software project
2. Complex enterprise app with lots of tables, like a vendor management system.
I mean, any scenario? I'm not trying to be snarky but server-side Javascript has always been a weird code smell from first premise. Now, when to use RoR vs a lighter-weight framework like Sinatra is a more interesting question, but it's about what you need out of the box.
Server-side JS is fine, and actually very nice in some contexts. The language and runtime(s) have come a long way.
But anyone who tries it without really understanding JS is eventually going to have a bad time. It’s important to know how to work with the event loop, how to properly use promises, etc. Server-side JS is a lot more unforgiving than front-end JS when it comes to these concepts.
I wouldn't say Rails is the most simple and abstracted way to build a web application. More so than Next.js, yes, but there are both older and newer technologies that keep things simpler.
Such as?
Simple and abstract are subjective, but for me: PHP, jQuery, htmx, Electric Clojure.
One of the biggest issues is that newer tools often lack Rails integrations. I recently built one for CKEditor - happy to share details if anyone's interested.
https://github.com/Mati365/ckeditor5-rails?tab=readme-ov-fil...
This Youtube series was doing some cool things integrating TipTap, but never finished:
I so wish we had Rails for JavaScript. Many have tried but no equivalent exists.
TBH I've started to like the GraphQL ruby layer in Rails projects as it creates a pretty clean boundary that works well with boilerplate and is more standardized than REST APIs.
And I find that the "convention based" approach lends itself well to having AI write stuff for you.
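For flavor, that boundary with the graphql-ruby gem looks roughly like this (type and model names illustrative):

    # Each field is declared, typed, and resolved in one place.
    class Types::QueryType < GraphQL::Schema::Object
      field :post, Types::PostType, null: true do
        argument :id, ID, required: true
      end

      def post(id:)
        Post.find_by(id: id)
      end
    end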
I feel the same, but for Django, even though I don’t write Python as much these days.
I love Ruby. However, based on my readings, Rust (e.g. Rocket) seems like a compelling choice due to its true parallel processing capabilities, strong typing, and impressive speed. Perhaps the author has yet to explore other technologies outside of Rails.
Ruby on Rails has an amazing DX (e.g. engines). We are trying to recreate that for JS with Wasp (https://github.com/wasp-lang/wasp)
Is Next.js really that popular? What else are people building back-end applications with? Are they just NOT building back-end applications and moving to services like Next.js with function-based hybrid backends?
>It became the foundation for numerous successful companies
And after the MVP phase passed and the company became successful, they usually rewrote the software in something else.
This never happens in real life. "Rewriting software" is the introverted programmer's wet dream, because it gives them relevance and the idea of respect. No serious business "rewrites software in something else" once they start to take off.
You don't do it for fun*, but because the rapid-development, duck-typed dynamic language you used to get to MVP quickly is not the language you need to keep it working under load and a growing feature set.
It's a terrible and difficult transition that makes you question whether the first language was really such a good choice after all, although it did get you where you are right now, which is more than you can say for a bunch of companies trying to do everything future-proof from day 0.
(* well, some people do, but they don't tend to survive)
I can point to plenty of companies that have rewritten products at scale. That said, specifically relevant to the article, I believe Shopify and GitHub continue to run Ruby on Rails.
> Next.js now serves as the most common tool for building a startup.
This is completely unfounded.
The number of crypto exchanges and newspapers I've seen that run on Nuxt.js...
Because crypto exchanges and newspapers make up the majority of startups? Most scams don't advertise themselves as startups, and most newspapers are just dying and going out of business, not rebranding as startups.
If you normalize for market cap, I think it's a reasonable assumption. But yeah, maybe it's a bit inflated.
It does not when ASP.NET Core exists. UX just as good or even better, 10x performance.
After all these years, rails is still my favorite framework to build with. Although I have become increasingly bored/frustrated with the front-end development in rails, which lacks a solid rails-way.
> front-end development in rails, which lacks a solid rails-way.
Hotwire/Turbo/Stimulus with import maps is the prescribed way. Tailwind is emerging as the preferred CSS lib.
Have you checked out Hotwired? https://hotwired.dev
This book has a good intro with more advanced patterns: https://railsandhotwirecodex.com/
I haven't actually used RoR, but I've used Django extensively and understand they are fairly similar. How do people build things that aren't just CRUD? Django calls itself a "web framework" but I think that's wrong, it's a CRUD app framework. Is RoR the same?
The main problem I have is the mixing up of low-level logic like web and database etc with high-level logic (ie. business rules). The easy path leads to a ball of mud with duplicated business rules across views and forms etc. How are people dealing with this? Does RoR fit into a larger application architecture where it does just the CRUD part and some other systems take over to do the business part?
It always seems to start well, you have your models, and views just doing CRUD stuff. But then someone says "I don't want to have to create an author before I create a book, let me enter new author details when I enter a book", and then the whole thing breaks. You need some logic somewhere to create authors but only in certain cases and of course the whole thing needs to be one transaction etc. Then you end up basically undoing all the simple views you did and essentially fighting the system to handle logic it was never designed to handle.
In essence, these systems make the easy stuff easy and the hard stuff even harder.
You can certainly write complex applications in Rails that go beyond CRUD. But in my experience (which may be outdated, I haven't written Rails in years), it requires a lot of discipline and going beyond what Rails itself offers. Sometimes you may even have to fight some of Rails's conventions.
There are some people who have tried to abstract such things into yet another framework on top of Rails, e.g. I recall Trailblazer[0], but I have no idea if anyone still uses it.
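That said, the specific book-and-author case above is well-trodden ground: Rails' nested attributes create both records, and Active Record saves them in one transaction. A minimal sketch (model names from the comment above):

    class Book < ApplicationRecord
      belongs_to :author
      accepts_nested_attributes_for :author  # allow author fields on the book form
    end

    # Creates the author and the book together; the save runs in one transaction.
    Book.create!(
      title: "The Pragmatic Programmer",
      author_attributes: { name: "Dave Thomas" }
    )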
Just because of job availability, I've been a JS (Node, React, Next, etc.) dev for almost a decade now. I still feel much more productive with Rails.
Rails isn't the right tool for every job, but I find that it's the right tool more often than not.
Rails is architected really well. Many decisions don't need to be made at all, and everything has a place. Plus, it's very friendly to extensibility and has a healthy ecosystem. It's mostly about the code that I don't need to write. It's really years beyond most other frameworks. Next will get there, but it will take it another 5 years. No shade on others, but Rails is just well built software with many years of learning and improving.
For highly reactive or "dynamic" systems, it probably isn't the right tool. Building a Figma or Notion. As @graypegg said in their comment, most websites work best as "CRUD forms". Though I would have said the same about email, but Hey.com exists so YMMV...
Does Django still matter too?
I think anything that applies to RoR applies equally to Django.
RoR got there first, but Python is a more relevant PL with broad ecosystem.
I would always recommend learning Django over RoR, unless you specifically want to learn a niche language.
Yes.
COBOL still matters, too. Would I choose to start a new project with it today, in 2025? Hell naw.
"Another trillion dollars worth of companies is being built on Next.js, and these web apps are faster and more polished than what could have been built on Ruby on Rails."
This makes absolutely no sense. HTTP is HTTP. Maybe one framework makes something more convenient than the other, but more "polished"? What does that even mean and what exactly is Next.js enabling?
Not adopting SPA architecture is the MAIN mistake of DHH and RoR committee.
“Your grandpa's vinyl records” as an analogy for Ruby on Rails.
Love it.
No.
I only read the first sentence, but try running a Next.js app from 2 years ago.
Good luck with that.
RoR needs to distance itself from DHH to matter.
Why? That's like somebody saying Linux has to distance itself from Linus because they have some sort of grudge against them or they don't like the authoritarian position they have over the project.
Literally everything interesting about RoR is because of DHH. Hotwire, Import Maps, Kamal, etc...
The one year working on a rails project I will never get back
What a horrendous pile of garbage, and before you ask, the project was started by two Rails experts.
Company almost went under because of it before we rewrote in Flask and react and then got acquired
Your comment would be more interesting with a bit more context.
Which version of rails?
Did you work in rails for a year, then rewrite?
How long did the rewrite take?
Was it a rails monolith - server side rendering and no api?
Was the database schema good - did you keep it for the rewrite?
Was it a good fit for react?
I suspect maybe they weren't experts if Flask + React seemed to solve whatever the problems were. (Particularly since using React with Rails is fine.)
That said, I've encountered a solid number of Rails projects that have been dumpster fires because their devs didn't follow conventions AND had horrible modeling/schema issues.
(Example: let's create our own job system that's many times worse than Active Job for no real reason...)
The other recurring thing that I see come up a lot with Rails projects is that nobody can really agree on where to put their business logic for anything complex.
I once had to fix a Rails project where the original developer chose not to use ActiveRecord, ActiveJob, or any of Rails' built-in features. Not sure why he wanted to go that route, unless he wanted to learn Ruby at the customer's expense.
I haven't learned PHP nor Ruby but if I had to go with one it would be PHP. Seems like I'd get the most utility out of it and prospects in my area.
In fact, I've never seen a RoR job posted in my area. Mostly PHP, Python and JS.
But you know, you gotta give credit where credit is due. Laravel would not be a thing if it weren't for RoR. RoR was incredibly influential.
Ok, so maybe I exaggerated, I've dabbled in these languages. Not really formally learned them. Or at least .... I can't say I've learned them if I haven't done any paid work with them.
> In fact, I've never seen a RoR job posted in my area. Mostly PHP, Python and JS.
The trick is to get jobs where people don't care about your tools but only the results. And then you can use whatever tools you want. Did that long ago with Django when there were literally no Python jobs in my area.