To be fair, if my search engine is anything to go on, about 0.5-1% of the requests I get are from human sources. The rest are from bots, and not just people who haven't found out I have an API, but bots attempting to poison Google's or Bing's query suggestions (even though I'm not backed by either). From what I've heard from other people running search engines, it looks the same everywhere.
I don't know what Google's ratio of human traffic to botspam is, but given how much of a payday it would be if anyone were to succeed, I can imagine they're serving their fair share of automated requests.
Requiring a headless browser to automate the traffic makes the abuse significantly more expensive.
If it's such a common issue, I would've thought Google already ignored searches from clients that do not enable JavaScript when computing results?
Besides, you already got auto-blocked when using it in a slightly unusual way. Google hasn't worked on Tor since forever, and recently I also got blocked a few times just for using it through my text browser that uses libcurl for its network stack. So I imagine a botnet using curl wouldn't last very long either.
My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.
Given that curl-impersonate[1] exists and that a major player in this space is also looking for experience with this library, I'm pretty sure forcing the execution of JS using DOM stuff would be a much more effective deterrent against scraping.
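To make the idea concrete, here's a rough sketch of the kind of challenge I mean; the element id, cookie name, and transform are made up for illustration, not anything any real site ships. The point is only that answering it requires a JS engine plus a real DOM, which rules out plain curl-impersonate.

    // Hypothetical challenge script served alongside the page.
    // The server embedded a one-time token in a DOM attribute;
    // only a client that builds a DOM and runs JS can read it back.
    const host = document.getElementById("js-challenge");       // made-up id
    const token = host?.getAttribute("data-token") ?? "";
    // Any trivial transform works; the server just checks it matches.
    const answer = btoa(token.split("").reverse().join(""));
    document.cookie = `js_ok=${answer}; path=/; max-age=600`;   // made-up cookie
    location.reload();                                           // retry, now carrying the cookie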
"Why didn't they do it earlier?" is a fallacious argument.
If we accepted it, there would basically only be a single point in time where a change like this could be legitimately made. If the change is made before there is a large enough problem, you'll argue the change was unnecessary. If it's made after, you'll argue the change should have been made sooner.
"They've already done something else" isn't quite as logically fallacious, but shows that you don't experience dealing with adversarial application domains.
Adversarial problems, which scraping is, are dynamic and iterative games. The attacker and defender are stuck in an endless loop of play and counterplay, unless one side gives up. There's no point in defending against attacks that aren't happening -- it's not just useless, but probably harmful, because every defense has some cost in friction to legitimate users.
> My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.
Yes, that kind of thing is very easy to just assert. But just think about it for like two seconds. How much more revenue are you going to make per user? None. Users without JS are still shown ads. JS is not necessary for ad targeting either.
It seems just as plausible that this is losing them some revenue, because some proportion of the people using the site without JS will stop using it rather than enable JS.
I run a not-very-popular site -- at least 50% of the traffic is bots. I can only imagine how bad it would be if the site was a forum or search engine.
I run a semi-popular website hosting user-generated content, although it's not a search engine; the attacks on it have surprised me, and I've eventually had to put in the same kinds of restrictions on it.
I was initially very hesitant to restrict any kind of traffic, relying on rate limiting IPs on critical endpoints that needed low friction, and captchas on higher-friction, higher-intent pages such as signup and password reset.
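For what it's worth, by "rate limiting IPs" I mean nothing fancier than a per-IP token bucket in front of the expensive endpoints; the limits below are illustrative, not what I actually run.

    // Minimal per-IP token bucket sketch (illustrative limits).
    const buckets = new Map<string, { tokens: number; last: number }>();
    const REFILL_PER_SEC = 5;   // sustained requests per second allowed
    const BURST = 20;           // short bursts tolerated

    function allowRequest(ip: string): boolean {
      const now = Date.now() / 1000;
      const b = buckets.get(ip) ?? { tokens: BURST, last: now };
      b.tokens = Math.min(BURST, b.tokens + (now - b.last) * REFILL_PER_SEC);
      b.last = now;
      const ok = b.tokens >= 1;
      if (ok) b.tokens -= 1;
      buckets.set(ip, b);
      return ok;   // false => respond with 429
    }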
Other than that, I was very liberal with most traffic, making sure that Tor was unblocked, and even ending up migrating off Cloudflare's free tier to a paid CDN due to inexplicable errors that users were facing over Tor that were ultimately related to how they blocked some specific requests over Tor with 403, even though the MVPs on their community forums would never acknowledge such a thing.
Unfortunately, given that Tor is effectively a free rotating proxy, my website got attacked on one of these critical, compute-heavy endpoints through multiple exit nodes totaling ~20,000 RPS. I reluctantly had to block Tor, and since then a few other paid proxy services discovered through my own research.
Another time, a set of human spammers distributed all over the world started sending a large volume of spam towards my website, something like 1,000,000 spam messages every day (I still feel this was an attack coordinated by a "competitor" of some sort, especially given that a small percentage of messages were titled "I want to get paid for posting" or something along those lines).
There was no meaningful differentiator between the spammers and legitimate users: they were using real Gmail accounts to sign up, analysis of their behaviour showed they were real people rather than simple or even browser-based automation, and they were operating from the same residential IPs as legitimate users.
I, again, had to reluctantly introduce a spam filter on some common keywords, and although some legitimate users do get trapped from time to time, this was the only way I could get a handle on that problem.
I'm appalled by some of the discussions here. Was I "enshittifying" my website out of unbridled "greed"? I don't think so. But every time I come here, I find these accusations, which makes me think that, as a site full of technical users, we can definitely do better.
The problem is accountability. Imagine starting a trade show business in the physical world as an example.
One day you start getting a bunch of people come in to mess with the place. You can identify them and their organization, then promptly remove them. If they continue, there are legal ramifications.
On the web, these people can be robots that look just like real people until you spend a while studying their behavior. Worse if they’re real people being paid for sabotage.
In the real world, you arrest them and find the source. Online they can remain anonymous and protected. What recourse do we have beyond splitting the web into a “verified ID” web, and a pseudonymous analog? We can’t keep treating potential computer engagement the same as human forever. As AI agents inevitably get cheaper and harder to detect, what choice will we have?
To be honest, I don't like initiatives towards a "verified web" either, and am very scared of the effects on anonymity that stuff like Apple's PAT, Chrome's now deprecated WEI or Cloudflare's similar efforts to that end are aimed at.
Not to mention that these would just cement the position of Google and Microsoft and block off the rest of us from building alternatives to their products.
I feel that the current state of things is fine; I was eventually able to restrict most abuse in an acceptable way with few false positives. However, I wish more people would understand these tradeoffs instead of jumping to uncharitable conclusions not backed by real-world experience.
> I'm appalled by some of the discussions here. Was I "enshittifying" my website out of unbridled "greed"? I don't think so. But every time I come here, I find these accusations, which makes me think that, as a site full of technical users, we can definitely do better.
If nothing else, it's very evident that most people fundamentally don't understand what an adversarial shit show running a public web service is.
There's a certain relatively tiny audience that has congregated on HN for whom hating ads is a kind of religion and google is the great satan.
Threads like this are where they come to affirm their beliefs with fellow adherents.
Comments like yours, those that imply there might be some valid reason for a move like this (even with degrees of separation) are simply heretical. I think these people cling to an internet circa 2002ish and the solution to all problems with the modern internet is to make the internet go back to 2002.
this is precisely how a shill would post
The problem isn’t the necessary fluff that must be added, it’s how easy it becomes to keep on adding it after the necessity subsides.
Google was a more honorable company when the ads were on the right-hand side only, instead of being blended into the main results to trick you. This is the enshittification people talk about. A decision with no reason other than pure profit at user expense. They were horrendously profitable when they made this dark-pattern switch.
Profits today can’t be distinguished accurately between users who know it’s an ad and those who were tricked into thinking it was organic.
Not all enshittification is equal.
My impression is that it's less effort for them to go directly to headless browsers. There are several footguns in using a raw HTML parsing lib and dispatching HTTP requests yourself. People don't care about resource usage, spammers even less, and many of them lack the skills.
Most black hat spammers use botnets, especially against bigger targets which have enough traffic to build statistics to fingerprint clients and map out bad ASNs and so on, and most botnets are low powered. You're not running chrome on a smart fridge or an enterprise router.
Chrome is probably the worst browser possible to run for these things, so it's not the basis for comparison.
There are many smaller browsers that run JavaScript and work on low-powered devices as well.
Starting from WebKit and stripping out the rendering parts, leaving just JavaScript execution and DOM processing, the RAM usage would be significantly lower.
True, but the bad actor's code doesn't typically run directly on the infected device. Typically the infected router or camera is just acting as a proxy.
There are ways to detect that and it will still require a lot of CPU and ram behind the proxies.
A major player in this space is apparently looking for people experienced in scraping without using browser automation. My guess is that not running a browser results in using far fewer resources, thus reducing their costs heavily.
Running a headless browser also means that any differences between the headless environment and a "headed" one can be discovered, since the target's JavaScript executes within the page, which makes it significantly more difficult to scale your operation.
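As a rough illustration, this is the kind of probe a target page can run against a headless visitor; each signal is well known and spoofable, and real detection stacks combine far more of them, so treat this as a sketch rather than anything any particular site actually does.

    // Illustrative headless-browser signals (each one can be faked).
    function looksAutomated(): boolean {
      const signals = [
        (navigator as any).webdriver === true,       // set by most automation drivers
        navigator.plugins.length === 0,              // often empty in headless builds
        /HeadlessChrome/.test(navigator.userAgent),  // default headless Chrome UA
      ];
      return signals.some(Boolean);
    }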
My experience is that headless browsers use about 100x more RAM, at least 10x more bandwidth and 10x more processing power, and page loads take about 10x as long to finish (vs curl). These numbers may even be a bit low; there are cases where you need to add another zero to one or more of them.
There's also considerably more jank with headless browsers, since you typically want to re-use instances to avoid incurring the cost of spawning a new browser for each retrieval.
Is it possible to pause a VM just after the browser has started up? Then map it as copy-on-write memory and spin up many VMs from that "image".
Your comment is interesting, and there are people doing work on this, although not specific to browser automation; e.g. AWS Lambda SnapStart boots your Java Lambda code, freezes a snapshot of the Firecracker MicroVM, and then starts other invocations from that snapshot.
However, even with a VM approach, you lose the fact that a small box (~512 MB) can make 100s or 1000s of plain HTTP(S) requests every second; once you're booting up a headless browser, you're probably restricted to loading no more than 3-4 pages per second.
On the other hand, you need to be able to handle basics like matching the headers, sometimes requesting irrelevant resources, handling malformed documents, catching changing form parameters, and other gotchas. Many would just copy the request from the browser console.
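Roughly what "copy the request from the browser console" ends up looking like in practice; the header values here are placeholders, not real tokens:

    // Replaying a search request with headers lifted from the browser's
    // network tab (values are placeholders).
    const res = await fetch("https://www.example.com/search?q=test", {
      headers: {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ... Chrome/120.0 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "en-US,en;q=0.9",
        "Cookie": "session=PLACEHOLDER",   // copied from a logged-in browser session
      },
    });
    const html = await res.text();         // hand off to an HTML parser from here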
The change rate for Chromium is also so high that it's hard to spot the addition of code targeting whatever you are doing on the client side.
It's so much more expensive and slow vs just scraping the HTML. It is not hard to scrape raw HTML if the target is well-defined (like Google).
I believe the main intent is to block SERP analysers, which track result positions by keywords. Not that it would help a lot with bot abuse, but will make regular SEO agency life harder and more expensive.
Last month Google also tightened YouTube policies, which IMHO is a sign that they are not reaching certain milestones, and that would definitely be reflected in Alphabet's stock.
They are going to make Google search even more broken than it is already? Be my guest! Since they are an ads business, I guess they don't really care about their search any longer, or they have sniffed some potential to gather even more information on users using Google, if they require running JS for it to work. Who knows. But anyone valuing their privacy has long left anyway.
Just tested (ignoring AI search engines, non-english, non-free):
Search engines which require JavaScript:
Google, Bing, Ecosia, Yandex, Qwant, Gibiru, Presearch, Seekr, Swisscows, Yep, Openverse, Dogpile, Waldo
Search engines which do not require JavaScript:
DuckDuckGo, Yahoo Search, Brave Search, Startpage, AOL Search, giveWater, Mojeek
Kagi.com works without JS
Have just updated my text: "ignoring non-free" :-)
I've put off learning JavaScript for over 20 years, now I'm not going to be able to search for anything
What's next? Not working for an adtech company?
Previous discussion: Google.com search now refusing to search for FF esr 128 without JavaScript (2025-01-16, 92 points), https://news.ycombinator.com/item?id=42719865
I recently discovered how great the ChatGPT web search feature is. Returns live (!) results from the web and usually finds things that Google doesn't - mostly niche searches in natural language that G simply doesn't get.
Of course, it uses JavaScript, which doesn't help with the problem discussed here.
But I do think that Google is internally seeing a huge drop in usage which is why they're currently running for the money. We're going to see this all across their products soon enough (I'm thinking Gmail).
I've been experimenting with creating single-site browsers[1] for all websites I routinely visit, effectively removing navigational queries from search engines; between that and Claude being able to answer technical questions, it's remarkable how rarely I even use browsers for day-to-day tasks anymore (as in web views with tabs and url bars).
We've been using the web (as in documents interconnected with links between servers) for a great number of tasks it was never quite designed to solve, and the result has always been awkward. It's been very refreshing to move away from the web browser-search engine duo for these things.
For one, and it took me a while to notice what was off: there are like no ads anymore, anywhere. Not because I use adblockers, but because I simply don't end up directed to places where there are ads. And let me tell you, if you've been away from that stuff for a while, and then come back, holy crap what a dumpster fire.
The web browser has been center stage for a long while, coasting on momentum and old habits, but it turns out it doesn't need to be, and if you work to get rid of it, you get a better and more enjoyable computing experience. Given how much better this feels, I can't help but feel we're in for a big shift in how computers are used.
[1] You can just launch 'chrome --app=url' to make one. Or use Electron if you want to customize the UI yourself.
It does not even work with JavaScript enabled! It's always asking for cookie permissions, captchas, Gmail login...
…and all the results are ads and seo blogspam.
Don't be evil.
Is JavaScript now evil?
There's absolutely no need for JavaScript on a page that has a text input and two buttons and that has worked without JS for nearly three decades. Given Google's reputation on privacy and their constant attempts at selling their users out, it's fair to assume that the reason they're requiring JavaScript is not noble.
> There's absolutely no need for JavaScript on a page that has a text input and two buttons
The whole web is evil then. Hacker news has JavaScript for simple upvote buttons, is it also evil?
HN is usable w/o JavaScript. It doesn't block my access because I choose not to allow it to execute arbitrary code on my computer.
* execute arbitrary code in one of the best studied sandboxes on the planet, which happens to be running on your computer.
> which happens to be running on your computer.
not if you turn it off
Voting on HN works without JS; it just forces a page refresh.
I don't disagree with you. Ever since marketers and advertisers weaponized JS, I've used NoScript, which lets me selectively enable each JS source a site loads, and you'd be surprised what you find and what still works with minimal JS. If anything, it's very educational.
Okay. But is it evil?
It's a well-intentioned bolt-on for adding reactivity without reloading the page, but it's been hijacked by the ad-industrial complex to keep tabs on your behavior for people who do not have your best interests in mind. That usage of it, I would say, qualifies under a weak definition of evil.
Yes
Whether it's evil or not is a difficult question. I'd say it's at least as bad as satan, considering we can actually confirm its existence. But that it arose naturally from this grotesque universe means it is a valid part of things. Maybe it is we who are evil and it that punishes us.
Javascript is like Flash-lite. Is it evil? No. It's great, even.
What every last commercial site uses it for IS evil, without a doubt.
It uses a lot of data, it is a security risk, it is a privacy risk, and it forces you to throw away your old devices.
How much extra data does JS on Google use vs without JS? We must be talking about kilobytes that are probably also cached.
It's literally almost an anagram of Satan's Armpits.
Javascript has always been evil.
It is in Google’s hands
Yes.
Yeah, roughly since 1996.
Always has been.
If you had to estimate, what percentage of the web could exist without losing functionality if it didn't use JavaScript?
Just FYI, the entire browsing and checkout process on Amazon.com works fine without JavaScript; discovering that radicalized me against so-called web apps. It just takes actually reading the HTML spec and maintaining state in the query string or via a session cookie. In the right circumstances, latency can be lower than the monstrosities people build with React.
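A toy sketch of the pattern (my own illustration, not how Amazon actually implements it): the whole "cart" lives in the query string, carried forward by hidden inputs, so plain forms are all you need.

    // Toy no-JS cart: state lives entirely in the query string.
    import { createServer } from "node:http";

    createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost");
      const cart = url.searchParams.getAll("item");
      const forms = ["book", "lamp", "mug"].map((item) => `
        <form method="get" action="/">
          ${cart.map((c) => `<input type="hidden" name="item" value="${c}">`).join("")}
          <input type="hidden" name="item" value="${item}">
          <button>Add ${item}</button>
        </form>`).join("");
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(`<h1>Cart: ${cart.join(", ") || "empty"}</h1>${forms}`);
    }).listen(8080);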
My phones have had JavaScript off by default for years. I'm amazed by how many sites work fine and are pleasurable to use.
And if you had to estimate, what percentage of the malicious web would exist without JavaScript?
I only know of the JavaScript/Spectre combo; are there tales of other evils lurking in the great wild?
I browse with JavaScript off by default on my phone. Guess I'm going to DuckDuckGo now.
You can also ditch Chrome by switching to DuckDuckGo browser.
Last days of Rome.
Honestly, I wouldn't be surprised if Google required some proof-of-work on the browser host's CPU/GPU to validate searches and thereby make them infeasible for bots.
That brings up an interesting conundrum. If PoW were implemented, could known-valid accounts (i.e. in good standing for over a decade) be switched over to PoS instead? Or paying accounts?
PoW could be built into infrequently visited pages such as the registration and password reset pages. It could run while the user fills in the form. I might implement this on some sites that get attacked.
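A minimal sketch of what that could look like, assuming a hash-based puzzle (the difficulty constant and function names are made up): the server issues a random challenge with the form, the page's script burns CPU looking for a nonce while the user types, and the server verifies the pair on submit.

    // Server side (Node): issue and verify a sha256 proof-of-work.
    import { createHash, randomBytes } from "node:crypto";

    const DIFFICULTY = 5;   // leading zero hex chars; tune to ~1-2s of client CPU

    export function newChallenge(): string {
      return randomBytes(16).toString("hex");   // embed this in the form
    }

    export function verifyPow(challenge: string, nonce: string): boolean {
      const digest = createHash("sha256").update(challenge + nonce).digest("hex");
      return digest.startsWith("0".repeat(DIFFICULTY));
    }

    // The client runs the same loop while the form is being filled in:
    export function solvePow(challenge: string): string {
      for (let nonce = 0; ; nonce++) {
        if (verifyPow(challenge, String(nonce))) return String(nonce);
      }
    }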
> quality of search results
A.k.a ads
Kagi it is, then
$10/mo is way too expensive
For you?
For me, it's worth every cent.
Is kagi that good?
You can find the endless reviews:
https://hn.algolia.com/?query=kagi&sort=byPopularity
Possibly one of the most reviewed SaaS companies here.
And yes, it's pretty great. But it's just Search (with some AI tossed in).
Incidentally, Kagi works with JavaScript disabled.
I tried it, it’s fine but I prefer the mix of uBlacklist and Bing with a duck branding and ChatGPT.
Well, I read the HN headline and said to myself, I bet this requirement is pitched as "...to enhance the user experience...", and, yep, it's there.
That's akin to the response to some incident where companies "take [user security etc.] seriously", when the immediate thought is: yeah, but if you did, that [thing] probably wouldn't have happened.
Dunno why I wrote all that - I don't use Google search, because I wanted to enhance (aka unenshitten) my search experience.
How else are you going to load a hideously incorrect AI summary block without your initial page latency being through the roof?
You could probably get it working with declarative shadow dom, streaming in the AI generated content at the end of the html document and slotting it into place. There are no doubt a lot of gotchas but at first glance it seems feasible. Here’s a demo I found of something like that: https://github.com/dgp1130/out-of-order-streaming
The example repo is a little confusing to me, since it seems to use client-side JS to demonstrate that it doesn't need client-side JS: "It bootstraps a service worker and [...] No client-side JavaScript!"
But I guess the point is that the code in the service worker could have been on the server instead?
The trick seems to be using a template element with a slot and then slotting in the streamed content at the end. But you could probably also do it using just CSS to reposition the content from the bottom to the top, similarly to how many websites handle navigation menus, assuming that the client supports CSS.
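A minimal sketch of that idea, assuming Node and a browser with declarative shadow DOM support (the element names and the fake summary helper are mine): the shadow root fixes the visual order with slots, so the slow AI block can be streamed last in the document but still render at the top, with no client-side JS.

    import { createServer } from "node:http";

    // Pretend AI call that takes a while.
    const slowSummary = () =>
      new Promise<string>((resolve) =>
        setTimeout(() => resolve("A confidently wrong summary."), 2000));

    createServer(async (_req, res) => {
      res.writeHead(200, { "Content-Type": "text/html" });
      // The shadow root defines display order: summary slot above results slot.
      res.write(`<!doctype html>
        <div id="page">
          <template shadowrootmode="open">
            <slot name="summary"><p>Summary loading…</p></slot>
            <slot name="results"></slot>
          </template>
          <ul slot="results"><li>result one</li><li>result two</li></ul>`);
      // Results are already visible; now block on the slow part.
      const summary = await slowSummary();
      res.end(`<p slot="summary">${summary}</p></div>`);
    }).listen(8080);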
Iframes lazy
Object content as lazy
Embed lazy
Image lazy
Link rel=import (not supported that widely though)
Heck, if you wanted to get REALLY cute you could use multipart/x-mixed-replace headers.
Or SSE
Now that I think about it, you could do it quite nicely with SVGs and foreignObject.
iframes?
This is the only time in 15 years I've wished HN had a lol react. :D
Why? Iframes are very much underappreciated.
Yet another reason to stop supporting Google with your clicks. Remember when their motto was "Don't be evil"?