There is a post describing the possibility of an organised campaign against archive.today [1] https://algustionesa.com/the-takedown-campaign-against-archi...
How does the tech behind archive.today work in detail? Is there any information out there that goes beyond the Google AI search reply or this HN thread [2]?
[1] https://algustionesa.com/the-takedown-campaign-against-archi... [2] https://news.ycombinator.com/item?id=42816427
If they're under an organised defamation campaign, they're not helping themselves by DDoSing someone else's blog and editing archived pages.
Is that, itself, true or disinformation?
I've not seen any evidence of them editing archived pages, BUT the DDoSing of gyrovague.com is true and still actively taking place. The author of that blog is Finnish, leading archive.today to ban all Finnish IPs by giving them endless captcha loops. After solving the first captcha, the page reloads and a JavaScript snippet appears in the source that attempts to spam gyrovague.com with repeated fetches.
How do you know that? Did you see it (do you have a Finnish IP?)?
It was true and visible when reported, yeah.
it gives them a voice.
And that voice is practically shouting, "I AM UNTRUSTWORTHY".
Or some shrewd sort of tactician.
That is not the worst thing to scream (especially after the FBI and Russian trail). Better to shout anything than to die in silence.
What kinda logic is that? If you don't want to die in silence, then shout something sensical. But if you're gonna shout garbage, just die in silence.
People say they want the old weird web back. Well there’s this.
The property of the medium: no one would repost or discuss "something sensical".
archive.today works surprisingly well for me, often succeeding where archive.org fails.
archive.org also complies with takedown requests, so it's worth asking: could the organised campaign against archive.today have something to do with it preserving content that someone wants removed?
There was also the recent news about sites beginning to block the Internet Archive. Feels like we are gearing up for the next phase of the information war.
Was that written by AI? It sounds like AI, spends lots of time summarizing other posts, and has no listed author. My AI alarm is going off.
Yeah, wow. Definitely setting off my AI summary alarm.
Ars was caught recently using AI to write articles when the AI hallucinated about a blogger getting harassed by someone using AI agents. The article quoted his blog and all the quotes were nonsense.
Yeah nearly certainly.
There are a number of blog posts like
owner-archive-today . blogspot . com
2 years old, like J.P.'s first post on AT
They are able to scrape paywalled sites at random, so I'm guessing a residential botnet is used.
I don't see the point in doxing anyone, especially those providing a useful service for the average internet user. Just because you can put some info together, it doesn't mean you should.
With this said, I also disagree with turning everyone that uses archive[.]today into a botnet that DDoSes sites. Changing the content of archived pages also raises questions about the authenticity of what we're reading.
The site behaves as if it was infected by some malware and the archived pages can't be trusted. I can see why Wikipedia made this decision.
For a very brief time, "doxing" (that is, dropping dox, that is, dropping docs, or documents) used to mean something useful. You gathered information that was not out in public, for example by talking to people or by stealing it, and put it out in the open.
It's very silly to talk about doxing when all someone has done is gather information anyone else can equally easily obtain, just given enough patience and time, especially when it's information the person in question put out there themselves. If it doesn't take any special skills or connections to obtain the information, but only the inclination to actually perform the research on publicly available data, I don't see what has been done that is unethical.
Call it stalking or harassment if you prefer. Regardless, it's rude (sometimes illegal) behaviour.
That's no justification for using visitors to your site to do a DDOS.
In the slang of reddit: ESH
It's neither of those. Stalking refers to persistent, unwanted, one-sided interactions with a person such as following, surveilling, calling, or sending messages or gifts. Investigating a person's past or identity doesn't involve any interaction with the physical person. Harassment is persistent attempts to interact with someone after having been asked to stop. Again, an investigation doesn't require any form of interaction.
> Harassment is persistent attempts to interact with someone
No, harassment also includes persistent attempts to cause someone grief, whether or not they involve direct interactions with that person.
From Wikipedia:
> Harassment covers a wide range of behaviors of an offensive nature. It is commonly understood as behavior that demeans, humiliates, and intimidates a person.
Doxing in the loose sense could be harassment in certain circumstances, such as if you broadcast a person's home address to an audience with the intent to cause that audience to use that address, even if the address was already out there. In that case, the problem is not the release of information, but the intent you're communicating with the release. It would be the same if you told that audience "you know guys? It's not very difficult to find jdoe's home address if you google his name. I'm not saying anything, I'm just saying." Merely de-pseudonymizing a screen name may or may not be harassment. Divulging that jdoe's real name is John Doe would not have the same implications as if his name was, say, Keanu Reeves.
Because the two are distinct, one can't simply replace "doxing" with "harassment".
Generally speaking, every case I've seen of people using the term "doxing" tends to be for the case that specifically is harassment; it has the connotation of using the information, precisely because if you aren't intending to use it there's no good reason for you to have it.
That's just another way the term is used incorrectly.
Language evolves. Connotation tends to become definition. Not always the only definition, but connotation becomes the "especially" or the "definition 2", and can become the primary definition over time.
That's not what I mean. If we agree that harassment is wrong and that doxing is not harassment (because not all doxing is harassment), then it's incorrect to say that doxing is wrong. For example, the article from the blog, even if we agree that it is doxing, isn't harassment. The person being discussed is presented in a positive light:
>I for one will be buying Denis/Masha/whoever a well deserved cup of coffee.
Using one term when what is meant is actually the other serves nothing but to sow confusion.
Update the etymology on Wikipedia with your reference, then.
The current etymology is what we're all talking about, obv.
Eh, you can find in public data things like someone's address based only on their name by looking up public mortgage records. That however is quite bad form, and if you did do that, I think it would be pretty unethical.
It's also kind of ironic that a site whose whole premise is to preserve pages forever, whether the people involved like it or not, is seeking to take down another site because they are involved and don't like it. Live by the sword, etc.
> It's also kind of ironic that a site whose whole premise is to preserve pages forever, whether the people involved like it or not
Oddly, I think archive.today has explicitly said that's not what they're there for, and that people shouldn't rely on their links as a long-term archive.
Where have they said it?
> Archive.today is a time capsule for web pages!
> It takes a 'snapshot' of a webpage that will always be online even if the original page disappears.
This reddit post collects some statements: https://old.reddit.com/r/DataHoarder/comments/1i277vt/psa_ar...
What are they for, then?
Bypassing paywalls? It actually seems like they've got accounts at many paywalled sites. Shorter term archiving?
Given the unclear ownership situation, it makes sense not to rely on them for anything long term. They could disappear tomorrow.
Sites that exist to archive other websites will almost always need to dynamically change the content of the HTML that they're serving in some way or another. (For example, a link that points to the root of the website may need to be changed in order to point to the right location.)
So it doesn't necessarily raise questions about whether the content has been changed or not. The difference is in whether that change is there to make the archive usable - and of course, for archive.today, that's not the case.
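For concreteness, here is a minimal sketch of the kind of rewriting an archiver has to do: pointing root-relative links at the snapshot's own base URL instead of the original site. This is a hypothetical illustration, not archive.today's actual code, and a real archiver would use a proper HTML parser rather than a regex.

```python
# Minimal sketch: rewrite root-relative href/src attributes so an
# archived page resolves against the archive's copy rather than the
# original site. The regex deliberately skips protocol-relative ("//")
# and absolute ("https://...") URLs.
import re

def rewrite_links(html: str, snapshot_base: str) -> str:
    """Point attributes that start with a single '/' at the snapshot base."""
    return re.sub(
        r'(href|src)="(/[^/"][^"]*)"',
        lambda m: f'{m.group(1)}="{snapshot_base}{m.group(2)}"',
        html,
    )

page = '<a href="/about">About</a> <img src="/logo.png">'
print(rewrite_links(page, "https://archive.example/snap/123"))
```

Absolute links (`https://...`) and protocol-relative links (`//cdn...`) pass through untouched, which is exactly the distinction between a usability edit and a content edit.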
> Changing the content of archived pages also raises questions about the authenticity of what we're reading.
This is absolutely the buried lede of this whole saga, and needs to be the focus of conversation in the coming age.
Did they actually run the DDoS via a script or was this a case of inserting a link and many users clicked it? They are substantially different IMO
https://news.ycombinator.com/item?id=46624740 has the earliest writeup that I know of. It was running it via a script and intentionally using cache busting techniques to try to increase load on the hosted wordpress infrastructure.
> It was running
It still is; uBlock's default lists are killing the script now, but if it's allowed to load then it still tries to hammer the other blog.
Ah good to know. My pi-hole actually was blocking the blog itself since the ublock site list made its way into one of the blocklists I use. But I've been just avoiding links as much as possible because I didn't want to contribute.
Given the site is hosted on wordpress.com, who don't charge for bandwidth, it seems to have been completely ineffective.
The speculation that I saw was that they'd try to get Wordpress.com to boot him off for being a burden on the overall infrastructure.
AT answered why the DDoS and why it is still active https://lj.rossia.org/users/archive_today/2478.html
This is an impressively unhinged take. I still have no idea what the person is trying to achieve. And I'm sad we're likely going to lose that resource in the future.
As if Wordpress.com was that dumb...
Are you kidding, it's wordpress
Thank you this is exactly the information I was looking for.
"You found the smoking gun!"
they silently ran the DDoS script on their captcha page (which is frequently shown to visitors, even when simply viewing and not archiving a new page)
As far as I understand the person behind archive.today might face jail time if they are found out. You shouldn't be surprised that people lash out when you threaten their life.
I don't think the DDOSing is a very good method for fighting back but I can't blame anyone for trying to survive. They are definitely the victim here.
If that blog really doxxed them out of idle curiosity they are an absolute piece of shit. Though I think this is more of a targeted campaign.
One thing they always teach you in Crime University is "don't break two laws at the same time." If you have contraband in your car, don't speed or run red lights, because it brings attention, and attention means jail.
In this case, I didn't know that the archive.today people were doxxed until they started the ddos campaign and caught attention. I doubt anyone in this thread knew or cared about the blogger until he was attacked. And now this entire thing is a matter of permanent record on Wikipedia and in the news. archive.today's attempt at silencing the blogger is only bringing them more trouble, not less.
Barbara_Streisand_Mansion.jpg
The weird thing is that there was nothing new in that blog post. And on top of that it couldn't conclusively say who the owner of archive.today is, so no one still knows.
We do not know what was important in that doxx.
Probably nothing, and the DDoS hype was intentional: to distract attention and highlight J.P.'s doxx among the others, making them insignificant.
J.P. might be the only one of the doxxers who could promote their doxx in the media, and this made his doxx special, not the content?
Anyway, it made the haystack bigger while keeping the needle the same.
> As far as I understand the person behind archive.today might face jail time if they are found out. You shouldn't be surprised that people lash out when you threaten their life.
One of the really strange things about all of this is that there is a public forum post in which a guy claims to be the site owner. So this whole debacle is this weird mix of people who are angry and saying "clearly the owner doesn't want to be associated with the site" on the one hand, but then on the other hand there's literally a guy who says he's the one that owns the site, so it doesn't seem like that guy is very worried about being associated with it?
It also seems weird to me that it's viewed as inappropriate to report on the results of Googling the guy who said he owns the site, but maybe I'm just out of touch on that topic.
There are even YouTube videos (of GamerGate-time, thus before AI era) with a guy claiming to be the site owner. A bit difficult to OSINT :)
Somebody who a) directs DDOS attacks and b) abuses random visitors' browser for those DDOS attacks is never the victim.
You don't know their motives for running their site, but you do get a clear message about their character by observing their actions, and you'd do well to listen to that message.
The character is completely irrelevant to whether they are a victim of doxxing.
They might be the worst person ever but that doesn't matter. People can be good and bad, sometimes the victim sometimes the perpetrator.
Is it morally wrong to doxx someone and cause them to go to jail because they are running an archive website? Yes. It is. It doesn't matter who the person is. It does not matter what their motivations are.
There are plenty of cases where the operator of archive.today refused to take down archives of pages with people's identifying information, so it's a huge double standard for them to insist that others not look into their identity using public information.
Irrelevant to a determination of fact, yes. But very relevant to the question of whether or not I care about any of this. Bad thing happened to bad person, lots of drama ensued, come rubberneck the various internet slapfights, details at 11. In other news, water is wet.
Has anyone else noticed that some of Archive.today's X/Twitter captures [1] are logged in with an account called "advancedhosters" [2], which is associated with a web hosting company apparently located in Cyprus? The latest post [3] from the account links to a blog post [4] including private communications between the webmaster of Archive.today (using their previously-known "Volth" alias) and a site owner requesting a takedown. Also note that the previous post [5] from the "advancedhosters" account was a link to a pro-Russia, anti-Ukraine article, archived via Archive.today of course. Seems like an interesting lead to untangle.
[1] https://archive.today/20240714173022/https://x.com/archiveis...
[2] https://x.com/advancedhosters
[3] https://x.com/advancedhosters/status/1731129170091004412
[4] https://lj.rossia.org/users/mopaiv/257.html
[5] https://x.com/advancedhosters/status/1501971277099286539
Lead to what?
I noticed last year that some archived pages are getting altered.
Every Reddit archived page used to have a Reddit username in the top right, but then it disappeared. "Fair enough," I thought. "They want to hide their Reddit username now."
The problem is, they did it retroactively too, removing the username from past captures.
You can see on old Reddit captures where the normal archived page has no username, but when you switch the tab to the Screenshot of the archive it is still there. The screenshot is the original capture and the username has now been removed for the normal webpage version.
When I noticed it, it seemed like such a minor change, but with these latest revelations, it doesn't seem so minor anymore.
> When I noticed it, it seemed like such a minor change, but with these latest revelations, it doesn't seem so minor anymore.
That doesn't seem nefarious, though. It makes sense they wouldn't want to reveal whatever accounts they use to bypass blocks, and the logged-in account isn't really meaningful content to an archive consumer.
Now, if they were changing the content of a reddit post or comment, that would be an entirely different matter.
Editing what is billed as an archive defeats the purpose of an "archive".
Don't be surprised by this; there are a lot more edits than you think. For example, CSS is always inlined so that pages render the same as they did when archived.
> Editing what is billed as an archive defeats the purpose of an "archive".
No, certain edits are understandable and required. Even archive.org edits its pages (e.g. sticks banners on them and does a bunch of stuff to make them work like you'd expect).
Even paper archives edit documents (e.g. writing sequence numbers on them, so the ordering doesn't get lost).
Disclosing exactly what account was used to download a particular page is arguably irrelevant information, and may even compromise the work of archiving pages (e.g. if it just opens the account to getting blocked).
The relevant part of the page to archive is the content of the page, not the user account that visited the page. Most sane people would consider two archives of the same page with different user accounts at the top, the same page.
It seems a lot of people haven't heard of it, but I think it's worth plugging https://perma.cc/ which is really the appropriate tool for something like Wikipedia to be using to archive pages.
It costs money beyond 10 links, which means either a paid subscription or institutional affiliation. This is problematic for an encyclopedia anyone can edit, like Wikipedia.
Wikimedia could pay, they have an endowment of ~$144M [1] (as of June 30, 2024). Perma.cc has Archive.org and Cloudflare as supporting partners, and their mission is aligned with Wikimedia [2]. It is a natural complementary fit in the preservation ecosystem. You have to pay for DOIs too, for comparison [3] (starting at $275/year and $1/identifier [4] [5]).
With all of this context shared, the Internet Archive is likely meeting this need without issue, to the best of my knowledge.
[1] https://meta.wikimedia.org/wiki/Wikimedia_Endowment
[2] https://perma.cc/about ("Perma.cc was built by Harvard’s Library Innovation Lab and is backed by the power of libraries. We’re both in the forever business: libraries already look after physical and digital materials — now we can do the same for links.")
[3] https://community.crossref.org/t/how-to-get-doi-for-our-jour...
[4] https://www.crossref.org/fees/#annual-membership-fees
[5] https://www.crossref.org/fees/#content-registration-fees
(no affiliation with any entity in scope for this thread)
> Organizations that do not qualify for free usage can contact our team to learn about creating a subscription for providing Perma.cc to their users. Pricing is based on the number of users in an organization and the expected volume of link creation.
If pricing is so much that you have to have a call with the marketing team to get a quote, I think it would be a poor use of WMF funds.
Especially because the volume of links and number of users that Wikimedia would entail is probably double their entire existing userbase, at least.
Ultimately we are mostly talking about a largely static web host. With legal issues being perhaps the biggest concern. It would probably make more sense for WMF to create their own than to become a perma.cc subscriber.
However for the most part, partnering with archive.org seems to be going well and already has some software integration with wikipedia.
If the WMF had a dollar for every proposal to spend Endowment-derived funds, their Endowment would double and they could hire one additional grant-writer
If the endowment is invested so that it brings in a very conservative 3% a year, that's $4.32M a year. By doubling that, rather many grant writers could be hired.
Does Wikipedia really need to outsource this? They already do basically everything else in-house, even running their own CDN on bare metal, I'm sure they could spin up an archiver which could be implicitly trusted. Bypassing paywalls would be playing with fire though.
Archive.org is the archiver; rotted links are replaced with Archive.org links by a bot.
Yeah, for historical links it makes sense to fall back on IA's existing archives, but going forward Wikipedia could take their own snapshots of cited pages and substitute them in if/when the original rots. It would be more reliable than hoping IA grabbed it.
Not opposed, Wikimedia tech folks are very accessible in my experience, ask them to make a GET or POST to https://web.archive.org/save whenever a link is added via the Wiki editing mechanism. Easy peasy. Example CLI tools are https://github.com/palewire/savepagenow and https://github.com/akamhy/waybackpy
Shortcut is to consume the Wikimedia changelog firehose and make these http requests yourself, performing a CDX lookup request to see if a recent snapshot was already taken before issuing a capture request (to be polite to the capture worker queue).
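The flow described above could be sketched roughly like this, using the public Wayback Machine availability API and save endpoint. This is a hedged sketch, not InternetArchiveBot's actual logic; the polite-capture behaviour (check for an existing snapshot before requesting one) is the point.

```python
# Sketch: consult the Wayback Machine availability API for an existing
# snapshot, and only request a fresh capture when none is on record.
import json
import urllib.parse
import urllib.request

AVAILABILITY_API = "https://archive.org/wayback/available?url="
SAVE_ENDPOINT = "https://web.archive.org/save/"

def availability_query(url: str) -> str:
    """Build the availability-API query for a target URL."""
    return AVAILABILITY_API + urllib.parse.quote(url, safe="")

def archive_if_missing(url: str) -> None:
    """Capture `url` only when no snapshot already exists (polite mode)."""
    with urllib.request.urlopen(availability_query(url)) as resp:
        data = json.load(resp)
    if not data.get("archived_snapshots", {}).get("closest"):
        # Fire-and-forget capture request to the save endpoint.
        urllib.request.urlopen(SAVE_ENDPOINT + url)
```

In practice you would also want authentication (the save endpoint rate-limits anonymous requests) and a CDX query for snapshot recency, as the comment above suggests.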
This already happens. Every link added to Wikipedia is automatically archived on the wayback machine.
[citation needed]
Ironic, I know. I couldn't find where I originally heard this years ago, but the InternetArchiveBot page linked above says "InternetArchiveBot monitors every Wikimedia wiki for new outgoing links" which is probably referring to what I said.
I didn't know you can just ask IA to grab a page before their crawler gets to it. In that case yeah it would make sense for Wikipedia to ping them automatically.
Why wouldn't Wikipedia just capture and host this themselves? Surely it makes more sense to DIY than to rely on a third party.
Why would they need to own the archive at all? The archive.org infrastructure is built to do this work already. It's outside of WMF's remit to internally archive all of the data it has links to.
Spammers and pirates just got super excited at that plan!
There are various systems in place to defend against them, I recommend against this, poor form against a public good is not welcome.
Archive.org are left wing activists that will agree to censor anything other left wing activists or large companies don't want online.
A bit off topic, but are there any self hosted open source archiving servers people are using for personal usage?
I think ArchiveBox[1] is the most popular. I will give it a shot, but it's a shame they don't support URL rewriting[2], which would be annoying for me. I read a lot of blog and news articles that are split across multiple pages, and it would be nice if that article's "next page" link was a link to the next archived page instead of the original URL.
2: https://github.com/ArchiveBox/ArchiveBox/discussions/1395
I like Readeck – https://codeberg.org/readeck/readeck
Open source. Self hosted or managed. Native iOS and Android apps.
Its Content Scripts feature allows custom JS scripts that transform saved content, which could be used to do URL rewriting.
https://web.archive.org/web/20260220191245if_/https://arstec...
archive.today is very popular on HN; the opaque, shortened URLs are promoted on HN every day
I can't use archive.today. I tried but gave up. Too many hassles. I might be in the minority but I know I'm not the only one. As it happens, I have not found any site that I cannot access without it.
The most important issue with archive.today though is the person running it, their past and present behaviour. It speaks for itself
Whoever it is, they have a lot of info about HN users' reading habits given that archive.today URLs are so heavily promoted by HN submitters, commenters and moderators.
I use archive.today all the time. How do you access pages, like for instance on the economist, without it?
http-request set-header user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr" if { hdr(host) -m end economist.com }
Years ago I used some other workaround that no longer works, maybe something like amp.economist.com. AMP with a text-only browser was a useful workaround for many sites. Workarounds usually don't last forever. Websites change from time to time. This one will stop working at some point.
There are some people who for various reasons cannot use archive.today
Which utility, extension, tool or language is that?
It's from a haproxy configuration file
This unfamiliarity is why I try to use programs that more HN readers are familiar with, like curl or wget, in HN examples. But I find those programs awkward to use. The examples may contain mistakes. I don't use those programs in real life
For making HTTP requests I use own HTTP generators, TCP clients, and local forward proxies
Given the options (a) run a graphical web browser and enable Javascript to solve an archive.today CAPTCHA that contains some fetch() to DDoS a blogger or (b) add a single line to a configuration file and use whatever client I want, no Javascript required, I choose (b)
With the paywall blocker so good it got banned! You can also get it on Android.
https://gitflic.ru/project/magnolia1234/bypass-paywalls-fire...
for instance on the economist: https://news.ycombinator.com/item?id=46060487
If dang and tomhow enforced a policy against paywalled content, it would garner less interest in accessing those pages via third parties. Most news gets reported by multiple outlets in general, so the same discussions would still surface.
"archive.today" as used here means the collection of archive.tld domains, where .tld could be ".is", ".md", ".ph", etc.
"promoted" as used here means placing an archive.tld URL at the top of an HN thread so that many HN readers will follow it, or placing these URLs elsewhere in threads
You can change the TLD of any archive.today link if .today doesn't work: for example archive.ph, archive.is, archive.md, etc.
There's a DNS issue between Archive Today and some ISPs which causes their domains not to resolve properly, which is why some people have a lot of trouble using it.
The fact is I can't have a discussion about a paywalled article without reading it. Archive.today is popular as a paywall bypass because nobody wants HN to devolve into debate based on a headline where nobody has RTFA.
> Whoever it is, they have a lot of info about HN users' reading habits given that archive.today URLs are so heavily promoted by HN submitters, commenters and moderators
It's not promoted, it's just used as a paywall bypass so everyone can read the linked article.
I believe there are multiple options with different degree of "half-baked"-ness, but can anyone name the best self-hosted version of this service?
Ultimately, what we all use it for is pretty straightforward, and it seems like by now we should've arrived at having approximately one best implementation, which could be used both for personal archiving and for internet-facing instances (perhaps even distributed). But I don't know if we have.
I'm wondering the same thing, would be great to have something similar for personal use
Kinda off-topic, but has anyone figured out how archive.today manages to bypass paywalls so reliably? I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous. I figured that they have found an (automated) way to imitate Googlebot really well.
> I figured that they have found an (automated) way to imitate Googlebot really well.
If a site (or the WAF in front of it) knows what it's doing then you'll never be able to pass as Googlebot, period, because the canonical verification method is a DNS lookup dance which can only succeed if the request came from one of Googlebot's dedicated IP addresses. Bingbot is the same.
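For reference, the "DNS lookup dance" is roughly the following, per Google's documented verification procedure: reverse-resolve the client IP, check the hostname is under googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A sketch, not production WAF code:

```python
# Sketch of canonical Googlebot verification via reverse + forward DNS.
import socket

def is_real_googlebot(ip: str) -> bool:
    try:
        host, _aliases, _addrs = socket.gethostbyaddr(ip)  # reverse DNS
    except OSError:
        return False  # no PTR record: cannot be a verified Googlebot
    if not host.endswith((".googlebot.com", ".google.com")):
        return False  # hostname is not in a Google-controlled zone
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]     # forward DNS
    except OSError:
        return False
    # Spoof-proof step: the hostname must resolve back to the claimed IP.
    return ip in forward_ips
```

A spoofer can fake the User-Agent header but not the PTR record for Google's address space, which is why imitating Googlebot at the HTTP layer alone fails against a careful site.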
There are ways to work around this. I've just tested this: I've used the URL inspection tool of Google Search Console to fetch a URL from my website, which I've configured to redirect to a paywalled news article. Turns out the crawler follows that redirect and gives me the full source code of the redirected web site, without any paywall.
That's maybe a bit insane to automate at the scale of archive.today, but I figure they do something along the lines of this. It's a perfect imitation of Googlebot because it is literally Googlebot.
I'd file that under "doesn't know what they're doing" because the search console uses a totally different user-agent (Google-InspectionTool) and the site is blindly treating it the same as Googlebot :P
Presumably they are just matching on *Google* and calling it a day.
Sure, but maybe there are other ways to control Googlebot in a similar fashion. Maybe even with a pristine looking User-Agent header.
> which I've configured to redirect to a paywalled news article.
Which specific site with a paywall?
> I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous.
The curious part is that they allow web scraping of arbitrary pages on demand. So a publisher could put in a lot of arbitrary requests to archive their own pages and see whether they all come from a single account or a small subset of accounts.
I hope they haven't been stealing cookies from actual users through a botnet or something.
Exactly. If I was an admin of a popular news website I would try to archive some articles and look at the access logs in the backend. This cannot be too hard to figure out.
You don't even need active measures. If a publisher is serious about tracing traitors there are algorithms for that (which are used by streamers to trace pirates). It's called "Traitor Tracing" in the literature. The idea is to embed watermarks following a specific pattern that would point to a traitor or even a coalition of traitors acting in concert.
It would be challenging to do with text, but is certainly doable with images - and articles contain those.
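As a toy illustration of the watermarking idea: embed a per-subscriber ID into the least-significant bits of an image's pixel bytes, so a leaked copy can be traced back. Real traitor-tracing schemes use collusion-resistant codes and robust (compression-surviving) watermarks; this naive LSB sketch only handles a single, non-colluding leaker.

```python
# Toy per-subscriber watermark: hide an account ID in pixel LSBs.
def embed_id(pixels: bytes, account_id: int, id_bits: int = 16) -> bytes:
    """Write account_id, little-endian bit by bit, into the first LSBs."""
    bits = [(account_id >> i) & 1 for i in range(id_bits)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the least-significant bit
    return bytes(out)

def extract_id(pixels: bytes, id_bits: int = 16) -> int:
    """Recover the embedded account ID from a (leaked) copy."""
    return sum((pixels[i] & 1) << i for i in range(id_bits))
```

The visual change is imperceptible (each affected byte shifts by at most 1), which is the whole appeal: each subscriber's copy is unique without looking any different.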
You need that sort of thing (i.e. watermarking) when people are intentionally trying to hide who did it.
In the archive.today case, it looks pretty automated. Surely just adding an html comment would be sufficient.
If they use paid accounts I would expect them to strip info automatically. An "obvious" way to do that is to diff the output from two separate accounts on separate hardware connecting from separate regions. Streaming services commonly employ per-session randomized steganographic watermarks to thwart such tactics. Thus we should expect major publishers to do so as well.
At which point we still lack a satisfactory answer to the question. Just how is archive.today reliably bypassing paywalls on short notice? If it's via paid accounts you would expect they would burn accounts at an unsustainable rate.
Watch https://news.ycombinator.com/threads?id=1vuio0pswjnm7 they post AT-free recipes for many paywalls
> which is, of course, ridiculous.
Why? In the world of web scraping this is pretty common.
Because it works too reliably. Imagine what that would entail: managing thousands of accounts. You would need to ensure you strip the account details from archived pages perfectly. Every time the website changes its code even slightly, you are at risk of losing one of your accounts. It would constantly break and would be an absolute nightmare to maintain. I've personally never encountered such a failure on a paywalled news article; archive.today has managed to give me a non-paywalled clean version every single time.
Maybe they use accounts for some special sites. But there is definitely some automated generic magic happening that manages to bypass the paywalls of news outlets. Probably something Googlebot-related, because those websites usually give Google their news pages without a paywall, probably for SEO reasons.
Using two or more accounts could help you automatically strip account details.
That's actually a really neat idea.
Replace any identifiers like usernames and emails with another string automatically.
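A minimal sketch of that two-copy idea, assuming you can fetch the same article under two different accounts (this is an illustration of the commenters' suggestion, not archive.today's actual pipeline): any token that differs between the copies is likely an account detail or per-session mark, so redact it.

```python
import difflib

def strip_account_tokens(copy_a: str, copy_b: str, placeholder="[REDACTED]") -> str:
    """Return copy_a with every word that differs from copy_b replaced."""
    words_a, words_b = copy_a.split(), copy_b.split()
    matcher = difflib.SequenceMatcher(a=words_a, b=words_b)
    out = []
    for op, a1, a2, b1, b2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(words_a[a1:a2])
        else:
            out.append(placeholder)  # token present only under one account
    return " ".join(out)
```

Of course, as noted above, this fails against randomized steganographic watermarks that survive in the parts both copies share, but it would strip visible usernames and emails automatically.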
I’m an outsider with experience building crawlers. You can get pretty far with residential proxies and browser fingerprint optimization. Most of the b-tier publishers use RBC and heuristics that can be “worked around” with moderate effort.
.. but what about subscription only, paywalled sources?
Many publishers offer "the first one's free".
For those that don't, I would guess archive.today is using malware to piggyback off of subscriptions.
I imagine accounts are the only way that archive.today works on sites like 404media.co that seem to have server-side paywalls. Similarly, Twitter has a completely server-side paywall.
It’s not reliable, in the sense that there are many paywalled sites that it’s unable to archive.
But it is reliable in the sense that if it works for a site, then it usually never fails.
no tool is 100% effective. Archive.today is the best one we've seen
Sounds like there's a gap in the market for a "commons" archive... maybe powered by something p2p like BitTorrent protocol?
This would have sounded Very Normal in the 2000s... I wonder if we can go back :)
P2P is generally bad for this use case: it only works for keeping popular content around (content gets dropped when the last peer that cares disconnects). If the content were popular, it wouldn't need archiving in the first place.
Imagine a proof-of-space cryptocurrency that encouraged archiving long-tail data.
There is an enormous amount of stuff that is only on archive.today, including stuff that is otherwise gone forever. A mix of stuff that somebody only ever did archive.today on and not archive.org, and stuff that could only be archived on archive.today because archive.org fails on it.
Anything on twitter post-login-wall for one. A million only-semi-paywalled news articles for others. But mainly an unfathomably long tail.
It was extremely distressing when the admin started(?) behaving badly for this reason. That others are starting to react this way to it is understandable. What a stupid tragedy.
Archive.is is now publishing really weird posts on their Tumblr blog, related to the whole thing
https://archive-is.tumblr.com/post/806832066465497088/ladies...
https://archive-is.tumblr.com/post/807584470961111040/it-see...
He’s probably being purposefully vague which makes for difficult reading.
The word salad about Ukraine, the arms trade, Nazis, and Hunter Biden leaves no doubt the operator is from Russia.
Wikipedia's own page on this topic is much more succinct about the context and change in policy
https://en.wikipedia.org/wiki/Wikipedia:Archive.today_guidan...
I noticed I've started being redirected to a blank nginx server for archive.is... but only the .is domain, .ph and .today work just fine. I wonder if they ended up on an adblocker or two.
There was some beef the site owner had with Cloudflare where, if you were using Cloudflare DNS, it wouldn't serve anything to you. Is that still happening?
Not sure why it would only be on archive.is and not the others but ‘is’ loads for me.
Oh maybe... I don't use cloudflare DNS, but maybe one of my rpz zones does something weird...
> If you want to pretend this never happened – delete your old article and post the new one you have promised. And I will not write “an OSINT investigation” on your Nazi grandfather
From hero to a Kremlin troll in five seconds.
Archive.today's domain registrar is Tucows for anyone wondering
FYI, archive.today is NOT the Internet Archive/Wayback Machine.
I prefer archive.today because the Internet Archive's Wayback Machine allows retroactive removal of archived pages. If a URL has already been crawled and archived, the site owner can later add that URL to robots.txt and request a re-crawl. Once the crawler detects the updated robots.txt, previously stored snapshots of that page can become inaccessible, even if they were captured before the rule was added.
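For illustration, the mechanism described above worked roughly like this (the path is hypothetical, and later replies note this policy has since changed):

```
# robots.txt on the live site
User-agent: ia_archiver    # the Wayback Machine's crawler
Disallow: /old-homepage/   # after a re-crawl, previously captured snapshots
                           # under this path also became inaccessible
```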
Unfortunately this happens more often than one would expect.
I found this out when I preserved my very first homepage, which I made as a child on a free hosting service. I archived it on archive.org and thought it would stay there forever. Then in 2017 the free host changed its robots.txt, closed all services, and my treasured memory was gone from the internet forever. ;(
This information is now many years out of date - they no longer have this policy.
Even so you can still just request your site to be removed: https://help.archive.org/help/how-do-i-request-to-remove-som...
The FBI called out archive.today a couple of months ago; there's clearly a campaign against them by the USA (4th Reich), which stands principally against any information repository it doesn't control or influence (it's Russian-owned). This is simply donors of the Trump regime who own media companies requesting this, because it's the primary way around paywalls for most people who know about it.
"Non-paywalled" ad-free link to archive: https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...
> an analysis of existing links has shown that most of its uses can be replaced.
Oh? Do tell!
> the community should figure out how to efficiently remove links to archive.today
You're part of the community! Prove him right!
I would be surprised if archive.today had something that was not in the Wayback Machine.
Archive.today has just about everything the archived site doesn't want archived. Archive.org doesn't, because it lets sites delete archives.
I know the behavior of each archiving service differs a bit. For example, both Archive.today and the Internet Archive may claim to have a copy of a page, but when you open the IA version it renders completely differently or not at all. It might be because the page has something like two scrollbars, or a redirect fires when the link loads. I notice this seems to happen on documentation pages hosted by Salesforce. It can be a pain when you want to save an online backup copy of a release note or similar for everyone to easily reference in the future.
Wayback machine removes archives upon request, so there’s definitely stuff they don’t make publicly available (they may still have it).
You don't even need to make a request if you own the URL: robots.txt changes are applied retroactively, which means you can disallow crawls to /abc, request a re-crawl, and all past snapshots matching the new rule will be removed.
Trying to search the Wayback machine almost always gives me their made-up 498 error, and when I do get a result the interface for scrolling through dates is janky at best.
Accounts to bypass paywalls? The audacity to do it?
Oh yeah, those were a thing. As a public organization they can't really do that.
I personally just don't use websites that paywall important information.
>> an analysis of existing links has shown that most of its uses can be replaced.
>Oh? Do tell!
They do. In the very next paragraph in fact:
> The guidance says editors can remove Archive.today links when the original source is still online and has identical content; replace the archive link so it points to a different archive site, like the Internet Archive, Ghostarchive, or Megalodon; or "change the original source to something that doesn't need an archive (e.g., a source that was printed on paper)"

> "I'm glad the Wikipedia community has come to a clear consensus, and I hope this inspires the Wikimedia Foundation to look into creating its own archival service," he told us.
Hardly possible for Wikimedia to provide a service like archive.today, given the legal trouble the latter is in.
Strangely naive.
So toward the end of last year, the FBI was after archive.today, presumably either for keeping track of things the current administration doesn't want tracked, or maybe for the paywall thing (on behalf of rich donors/IP owners). https://gizmodo.com/the-fbi-is-trying-to-unmask-the-registra...
That effort appears to have gone nowhere, so now suddenly archive.today commits reputational suicide? I don't suppose someone could look deeper into this please?
The archive.today operator claims on his blog that this was nothing major: https://lj.rossia.org/users/archive_today/
> Regarding the FBI’s request, my understanding is that they were seeking some form of offline action from us — anything from a witness statement (“Yes, this page was saved at such-and-such a time, and no one has accessed or modified it since”) to operational work involving a specific group of users. These users are not necessarily associates of Epstein; among our users who are particularly wary of the FBI, there are also less frequently mentioned groups, such as environmental activists or right-to-repair advocates.
> Since no one was physically present in the United States at that time, however, the matter did not progress further.
> You already know who turned this request into a full-blown panic about “the FBI accusing the archive and preparing to confiscate everything.”
Not sure who he's talking about there.
>In emails sent to Patokallio after the DDoS began, “Nora” from Archive.today threatened to create a public association between Patokallio’s name and AI porn and to create a gay dating app with Patokallio’s name.
Oh good. That's definitely a reasonable thing to do or think.
The raw sociopathy of some people. Getting doxxed isn't good, but this response is unhinged.
It's a reminder how fragile and tenuous are the connections between our browser/client outlays, our societal perceptions of online norms, and our laws.
We live at a moment where it's trivially easy to frame possession of an unsavory (or even illegal) number on another person's storage media, without that person even realizing (and possibly, with some WebRTC craftiness and social engineering, even get them to pass on the taboo payload to others).
I mean, the admin of archive.today might face jail time if deanonymised, kind of understandable he's nervous. Meanwhile for Patokallio it's just curiosity and clicks
That was private negotiations, btw, not public statements.
In response to what? J.P.'s blog had already framed AT as a project grown out of a carding forum and pushed his speculations onto Ars Technica, whose parent company just destroyed 12ft and is on to a new victim. The story is full of untold conflicts of interest, covered over with a soap opera around the DDoS.
Why does it matter that it was private communication?
It’s still a threat isn’t it?
Can you elaborate on your point?
The fight is not about where links are shown or what they point to, not about "links in Wikipedia", but about whether News Inc will be able to kill AT, as they did with 12ft.
What is News Inc? Are they a funder of Wikipedia? (I think Wikipedia doesn't have a parent company, so they're not owners.)
They own Ars Technica, which has now written its third (or fourth?) article in a row on AT, painting it in a certain light.
The article about the FBI subpoena that pulled J.P.'s speculations out of the closet was also in Ars Technica, by the same author, and that same article explicitly mentioned how happy they are that 12ft is down.
… Ars is owned by Conde Nast?
from the Ars article:
--- US publishers have been fighting web services designed to bypass paywalls. In July, the News/Media Alliance said it secured the takedown of paywall-bypass website 12ft.io. “Following the News/Media Alliance’s efforts, the webhost promptly locked 12ft.io on Monday, July 14th,” the group said. (Ars Technica owner Condé Nast is a member of the alliance.) ---
Anecdotally I generally see archive.is/archive.today links floating around "stochastic terrorist" sites and other hate cults.
They seem totally unrelated to the Internet Archive. They probably only ever got onto Wikipedia by leeching off the IA brand and confusing enough people into using them.
The Wayback Machine won't bypass paywalls or pirate content, not to mention they are under US jurisdiction. You can't have your cake and eat it too.
Honestly, IMHO archive.today is just so much nicer to use in every aspect than IA, that unless they outright start to distribute malware (I mean, like, via the page itself — otherwise it's pretty much irrelevant), I don't think I'll stop using it.
At this point Archive.today provides a better service (all things considered) compared to Wikipedia, at least when it comes to current affairs.
Why not show both? Wikipedia could display archive links alongside original sources, clearly labeled so readers know which is which. This preserves access when originals disappear while keeping the primary source as the main reference.
The objection is to this specific archive service, not to archiving in general.
Wikipedia shouldn't allow links to sites which intentionally falsify archived pages and use their visitors to perform DDOS attacks.
They generally do. Random example, citation 349 on the page of George Washington: ""A Brief History of GW"[link]. GW Libraries. Archived[link] from the original on September 14, 2019. Retrieved August 19, 2019."
This will always be done unless the original url is marked as dead or similar.
Does anyone have a short summary of who archive.today DDoSed and why? Isn't that something done by malicious actors? Or did others misuse archive.today?
If you read the linked article it is discussed
I will no longer donate to Wikipedia as long as this is policy.
Why? The decision seems reasonable at first sight.
Second sight is advisable in such cases. Fact is, archives are essential to WP integrity and there's no credible alternative to this one.
I see WP is not proposing to run its own.
Wouldn't it be precisely because archives are important that using something known to modify the contents would be avoided?
> something known to modify the contents would be avoided?
Like Wikipedia?
No, not like that. There's a difference between a site that:
1) provides a snapshot of another site for archival purposes, and one that
2) provides original content.
You're arguing that since encyclopedias change their content, the Library of Congress should be allowed to change the content of the materials in its stacks.
By modifying its archives, archive.today just flushed its credibility as an archival site. So what is it now?
> You're arguing that since encyclopedias change their content, the Library of Congress should be allowed to change the content of the materials in its stacks.
As an end user of Wikipedia there are occasions where content has been scrubbed and/or edits hidden. Admins can see some of those, but end users cannot (with various justifications, some excellent/reasonable and some.. nebulous). That's all I'm saying, nothing about Congress or such other nonsense. It seems like an occasion of the pot calling the kettle names from this side of the fence.
But Wikipedia promises you that it will modify its content. They're transparent about that promise.
An archival site (by default definition) promises you that it will not modify its content. And when it does, it's no longer an archival site.
Wikipedia has never been an archival site and it never will be. archive.today was an archival site, but now it never will be again.
Obviously not, since archive.org is encouraged.
What exactly is credible about archive.today if they are willing to change the archive to meet some desire of the leadership? That's not credible in the least.
A lot more credible than archive.org that lets archives be changed and deleted by the archive targets.
What's your better idea?
Does archive.org really let its archives be changed? That's very different than letting them be deleted from a credibility perspective.
Yes.
Archive.org snapshots may load JavaScript from external sites where the original page loaded it. That script can change anything on the page. Most often the domain has expired and been hijacked by a parking company, so it just replaces the whole page with ads.
Example: https://web.archive.org/web/20140701040026/http://echo.msk.r...
----
And another example: https://web.archive.org/web/20260219005158/https://time.is/
The page "gets changed" every second. It is easy to make an archived page that shows different content depending on the current time, whether you're on Mac or Windows, your locale, or your browser fingerprint, or that is tailored for you personally.
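One quick way to check for the risk described above is to list every script an archived snapshot would load from a domain other than the archive itself; any such script runs live at view time, outside the archive's control. A sketch only, using the standard library (the snapshot HTML below is a made-up example):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalScriptFinder(HTMLParser):
    """Collect script src URLs that point outside the archive's own host."""
    def __init__(self, archive_host):
        super().__init__()
        self.archive_host = archive_host
        self.external = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        if host and host != self.archive_host:
            self.external.append(src)  # fetched live, e.g. from a parked domain

snapshot = '<html><script src="https://expired-cdn.example/ads.js"></script></html>'
finder = ExternalScriptFinder("web.archive.org")
finder.feed(snapshot)
# finder.external now lists the scripts that could rewrite the page at view time
```

An empty result doesn't prove a snapshot is static, but a non-empty one is exactly the failure mode the expired-domain example demonstrates.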
I don't think it's fair to equate running JS that can change the rendered output with the archive server actually changing the HTML it sends back.
> the archive targets
Isn't there a substantial overlap with the copyright holders?
The operators of archive.today (and the other domains) are doing shady things, and the links are not working, so why keep the site around when, for example, the Internet Archive's Wayback Machine works as an alternative?
What archive.today links are not working?
> the Internet Archive's Wayback Machine works as an alternative to it.
It is appallingly insecure: it lets archives be altered by page JS and deleted by the page's domain owner.
Currently, as far as I know, both archive.today and archive.is have the same DDoS code on the main page. For more details: https://gyrovague.com/2026/02/01/archive-today-is-directing-...
> there's no credible alternative to this one.
But this one is not credible either so...
> Fact is, archives are essential to WP integrity and there's no credible alternative to this one.
Yes, they are essential, and that was the main reason for not blacklisting Archive.today. But Archive.today has shown it does not actually provide such a service:
> “If this is true it essentially forces our hand, archive.today would have to go,” another editor replied. “The argument for allowing it has been verifiability, but that of course rests upon the fact the archives are accurate, and the counter to people saying the website cannot be trusted for that has been that there is no record of archived websites themselves being tampered with. If that is no longer the case then the stated reason for the website being reliable for accurate snapshots of sources would no longer be valid.”
How can you trust that the page that Archive.today serves you is an actual archive at this point?
> If ... If ...
Oh dear.
> How can you trust that the page that Archive.today serves you is an actual archive at this point?
Because no one has shown evidence that it isn't.
The quote uses ifs because it was written before this was verified, but the Wikipedia thread in question has links to evidence of tampering occurring.
Let's see them, then.
Are they referring to https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment... ?
Did you not read the article? They not only directed a DDoS against a blogger who crossed them, but also altered their own archived snapshots to amplify a smear against him. That completely destroys their trustworthiness and credibility as a source of truth.
Sure I read it. But I don't believe everything I read on the internet.
Altered snapshots = hiding Nora's name?
Ars Technica just did the same: removed Nora from older articles. How can you trust Ars Technica after that?
They didn't just remove her name, but replaced it with the target's name.
I don't know what you're talking about re: Ars removing her name from old articles.
About how much had you previously donated over the years?