• arjie 12 hours ago

    Something I wish we could have is some kind of peer mirror of archive.org. The main IA web application gets angry pretty quickly if you're trying to click through a few different dates. If there were some way to slowly mirror it (torrent-style) and offer pages as a peer of archive.org, that would be neat. It would be cool to show up as an alternative source for the data, and the archive.org app could fetch from a peer at the user's choice and validate the checksum if required.

    I've ended up just keeping my own ArchiveBox, and it's an all right experience, but it's only useful for things I already knew I wanted to archive. For almost everything else I go to the IA - which has so much.

    • 1vuio0pswjnm7 8 minutes ago

      "Something I wish we could have is some kind of peer mirror of archive.org."

      As a reasonably heavy IA user, this is something I have wondered about over the years. I do not use the "web application". I prefer to send Memento requests from the command line. Indeed, IA can sometimes be sluggish. Most of the time it isn't. The speed variance seems to be based, at least partially, on time of day and day of week. IA can also be very fast. Most of the time, it's fast enough.
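
      For the curious, a lookup like that can be scripted in a few lines. A rough sketch in Python, using what I believe is IA's public Memento TimeMap endpoint (example.com is just a placeholder):

        import requests

        # Ask IA's Memento TimeMap endpoint for the captures of a URL.
        # The "link" format lists one memento per line with its datetime.
        timemap = requests.get(
            "https://web.archive.org/web/timemap/link/http://example.com/",
            timeout=60,
        )
        for line in timemap.text.splitlines():
            if 'rel="memento"' in line:
                print(line.split(";")[0].strip(" <>,"))  # archived snapshot URL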

      Perhaps there are not that many people actually using IA _consistently_ on a daily basis, for example, to retrieve "current" files from the web. It is not perfect but IMHO it is probably more useful than most people believe. The more heavily I have relied on it over the years to attempt to retrieve _any_ web page, the more I have been surprised by how well it works. Again, it's not perfect, but it is remarkably useful nonetheless.

      Like Wikipedia, IA is IMO far more efficient than "Big Tech" in terms of the information value it can provide relative to the amount of funding, effort and paid staff allocated to run it. Now that Google has removed public access to its cache, IMHO IA has no real competition.^1

      1. HN replies might try to argue that archive.today is a viable IA alternative. But there are significant differences, for example, (a) AT only archives text and images, no PDF, tarballs, zip, etc., (b) AT is a constant target of censorship, (c) AT is configured to serve CAPTCHAs based on a _single_ HTTP request, (d) AT's archive only dates back to 2012, (e) the size of AT's archive is minuscule compared to IA, (f) AT does not crawl, it only archives on demand

      • pronoiac 4 hours ago

        The Archive Team - not part of the Internet Archive - worked on a distributed backup of a portion of the Internet Archive - https://wiki.archiveteam.org/index.php/INTERNETARCHIVE.BAK

        It's been dormant / on hiatus for a few years now.

        • smallerize 2 hours ago

          That can only cover other collections though, because the WARC files from the Wayback Machine web scrapes are not public.

        • renegat0x0 5 hours ago

          - I can confirm that the web archive can be really slow

          - I think I have seen reports that AI scrapers create a bandwidth bottleneck

          - Some digital archives require you to create a research account (I think Common Crawl works like that)

          - The data can very easily get very big. The goal is to store many things: not just the Internet, but the Internet with an additional dimension of time

          - Since there is so much data, it is difficult to navigate and search, so it can easily become unusable

          - That is why, for example, I created my own metadata database; I needed some information about domains

          Link:

          https://github.com/rumca-js/Internet-Places-Database

          • uses an hour ago

            Yeah, I did a scraping project a while back where I wanted to look back at historical snapshots. Getting the info out of Internet Archive was surprisingly difficult. I ended up using https://pypi.org/project/pywaybackup/, which helped quite a bit.
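
            If it helps anyone else: the snapshot listing can also be pulled straight from the Wayback CDX API. A rough sketch (URL and date range purely illustrative):

              import requests

              # List captures of a URL via the Wayback Machine CDX API.
              # output=json returns a header row followed by one row per capture.
              rows = requests.get(
                  "https://web.archive.org/cdx/search/cdx",
                  params={"url": "example.com", "output": "json",
                          "from": "2015", "to": "2016"},
                  timeout=120,
              ).json()
              if rows:
                  header, captures = rows[0], rows[1:]
                  for capture in captures:
                      entry = dict(zip(header, capture))
                      print(entry["timestamp"], entry["statuscode"], entry["original"])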

            • giancarlostoro 3 hours ago

              I do wonder why IA does not maintain an IPFS instance, or if they do, why it's not more popular? There are tons of IPFS mirror services out there that operate at reasonable speeds. One issue I've run into with IA is websites old enough that their JS or CSS just won't render; what I'm not sure about is whether we can retroactively fix such things. Would be nice to be able to un-ruin the code somehow, if they exported everything possible at the time.

              Edit:

              Would be really neat if you could click on a domain while on IA and have a desktop client download as many WARC files as you're interested in via a slower, prioritized download queue, higher-priority pages first, so you could then view the site fully offline.

              • stavros 3 hours ago

                Because nobody pins on IPFS. It's basically http with extra steps, at this point.

                • TechSquidTV 3 hours ago

                  They do torrents. I was looking into this recently as well, considering building an ActivityPub alternative to IA. I came to what I assume is the same conclusion that IA came to.

                  No one uses IPFS. For the average user, it is significantly more difficult to get started. For the experienced user, the ecosystem of tools around IPFS is extremely small.

                  All in all, IPFS offers very little benefit over torrents in practice and has a much smaller user pool.

                  • kevincox 3 hours ago

                    The problem with the torrents is that they get updated when the files change (sometimes just small metadata changes), and then your seeders can't be found. Maybe if they also kept a list of old hashes, you could at least manually try to recover data from the older torrent?

                    • Lammy an hour ago

                      This is outdated information. These issues have been solved by various BitTorrent Enhancement Proposals. You do create a new torrent, but you distribute it in a way that to a swarm member is functionally equivalent to updating an old torrent. Check out BEP-0039 and BEP-0046 which respectively cover the HTTP and DHT mechanisms for updating torrents:

                      https://www.bittorrent.org/beps/bep_0039.html

                      https://www.bittorrent.org/beps/bep_0046.html

                      If that updated torrent is a BEP-0052 (v2) torrent it will hash per-file, and so the updated v2 torrent will have identical hashes for files which aren't changed: https://www.bittorrent.org/beps/bep_0052.html

                      This combines with BEP-0038 so the updated torrent can refer to the infohash of the older torrents with which it shares files, so if you already have an old one you only have to download files that have changed: https://www.bittorrent.org/beps/bep_0038.html
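
                      To make the BEP-0046 part concrete: the magnet link names the publisher's public key rather than any single infohash, and the publisher signs new infohashes into the DHT under that key. A rough sketch of just the link construction (key generation shown with the cryptography package; the DHT publishing itself is not shown):

                        from cryptography.hazmat.primitives import serialization
                        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

                        # BEP-0046 "mutable" torrents: clients follow a magnet link that points
                        # at an ed25519 public key; the publisher puts signed (sequence number,
                        # infohash) records into the DHT under that key, so subscribers can
                        # always discover the newest version of the torrent.
                        private_key = Ed25519PrivateKey.generate()
                        public_key_hex = private_key.public_key().public_bytes(
                            encoding=serialization.Encoding.Raw,
                            format=serialization.PublicFormat.Raw,
                        ).hex()

                        magnet = f"magnet:?xs=urn:btpk:{public_key_hex}"
                        print(magnet)  # resolved via the DHT instead of a fixed infohash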

                      • landl0rd 10 minutes ago

                        I believe libtorrent got updated with BEP-0052 support but can't recall any major implementation supporting 39/46. Am I wrong on that?

                        • NoMoreNicksLeft an hour ago

                          Have any of these even started to be implemented in any client/library? It's been years.

                      • outside1234 3 hours ago

                        IPFS is a great idea poorly executed. Content-addressable storage is compelling, but it is very difficult to use in practice for real-world, scaled scenarios (anything larger than one hard disk drive).

                      • komali2 3 hours ago

                        I spent a bit of time trying to find it just now, but I swear I read a super long blog post or comment by someone at archive.org concluding essentially that IPFS just "isn't ready" - it wasn't feasible for their needs because it's super slow, and given the volume of transactions they handle, they didn't see an optimization path.

                        I wish I could find that article!

                        edit: https://github.com/internetarchive/dweb-archive/blob/master/...

                      • stavros 3 hours ago

                        I have a design for a system where you can "donate" your disk space to a provider. Basically, you run the client, you say you want to make 1TB available to archive.org, and their server can push the rarest content to your computer.

                        It's based on torrents, and you can easily make a content delivery system on top of this (so people can fetch data from this network).

                        I emailed a few archiving teams but nobody seemed interested, so I never made it.
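
                        To give an idea of the core of it, the "push the rarest content" step would be roughly this. A sketch only; all names and numbers are hypothetical:

                          import heapq
                          from dataclasses import dataclass, field

                          @dataclass(order=True)
                          class Item:
                              replicas: int                       # volunteers currently holding this item
                              size_gb: float = field(compare=False)
                              name: str = field(compare=False)

                          def assign(catalog: list[Item], quota_gb: float) -> list[Item]:
                              """Pick the rarest items for a new volunteer until their quota is full."""
                              heapq.heapify(catalog)              # min-heap on replica count: rarest first
                              chosen, used = [], 0.0
                              while catalog and used + catalog[0].size_gb <= quota_gb:
                                  item = heapq.heappop(catalog)
                                  chosen.append(item)
                                  used += item.size_gb
                              return chosen

                          items = [Item(1, 300.0, "collection-a"),
                                   Item(4, 200.0, "collection-b"),
                                   Item(2, 500.0, "collection-c")]
                          for item in assign(items, quota_gb=1000.0):
                              print(item.name, item.replicas)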

                        • toomuchtodo 3 hours ago

                          It's a hard problem to solve: it's easy to temporarily donate resources to archiving ops via the ArchiveTeam Warrior, but a long-term commitment to run persistent compute and storage to mirror a chunk of the Internet Archive is much rarer. It's why I think Filecoin isn't going to work either; there's very little overlap between the people who feel it's important to keep these archives alive and the people who would run distributed storage to collect financial compensation for doing so.

                          Easier to send fiat to IA for them to invest (~$2/GB) and to pay to keep the disks spinning somewhere safe across the world.

                          (ia volunteer, no affiliation otherwise)

                          • stavros 3 hours ago

                            The system I have in mind is strictly volunteer-run, and it automatically balances the files so that it minimises rare copies.

                            You're right, though, long-term commitment is rare from volunteers. That's why the idea is to make short-term commitment so easy that you have a good enough pool of short-termers that it works out in the aggregate.

                            • toomuchtodo 3 hours ago

                              Appreciate your work on this.

                              • stavros 3 hours ago

                                Eh I didn't really do any work, it's just a design right now, but I think it's a nice one. If any archive team wants to work with me on this, I'd be happy to make it a reality so we have a nice FOSS system for distributed, volunteer-led backups.

                                • toomuchtodo 3 hours ago

                                  I suggest emailing textfiles, he'll know who to connect you with in ArchiveTeam, and if there is an opportunity to connect with the decentralized web folks at ia. Strongly believe your architecture is superior to filecoin and IPFS due to relying on torrent primitives.

                                  (ia source of truth, storage system of last resort -> item index -> torrent index -> global torrent swarm)

                                  • stavros 3 hours ago

                                    Thanks, I will!

                          • 1gn15 3 hours ago

                            Anna's Archive has this system. This also sounds like Freenet.

                            • stavros 3 hours ago

                              Freenet has a bunch of encryption, which is out of scope for this. What does Anna's Archive have, besides torrents?

                              • 1gn15 an hour ago

                                I'm a bit confused. Isn't this such a system where people can volunteer disk space?

                                https://annas-archive.org/torrents

                                I think I'm misunderstanding you.

                                • stavros an hour ago

                                  My system is more "I want to donate X GB" and it handles everything, filling that space up, getting the rarest torrents, getting updates, etc. Think of it as a central server managing a globally-distributed, unreliable JBOD in a "push" manner, rather than just downloading a torrent and being done.

                          • zapataband2 12 hours ago

                            Is there such a thing as "versioned" torrents? Assuming you have the right PGP key, you could mix BitTorrent and packaging systems to get an updateable distribution.

                            • throawayonthe 6 hours ago

                              There is the BitTorrent v2 standard: https://blog.libtorrent.org/2020/09/bittorrent-v2/

                              But unfortunately most FOSS torrent clients do not support it, partly because at release libtorrent 2.0.x had poor I/O performance in some cases, so torrent clients reverted to the 1.2.x branch.

                              • pabs3 12 hours ago
                                • pronoiac 3 hours ago

                                  I think SciOp is doing something in that area, with a catalog site and webseeds. https://sciop.net/

                                  • hsbauauvhabzb 12 hours ago

                                    A torrent would probably suffocate under the small-file distribution. I'm not sure how the ROM-set torrents work, but I thought they were versioned.

                                    But torrent is probably the wrong tech. I'm sure there would be many players willing to host a few TB or more each, which could be fronted via something so it's transparent to the user.

                                    But a better option might be a subscription model; anything else will be slammed by crawlers.

                                • jonah-archive 11 hours ago

                                  Hi, I run the datacenter/infrastructure team at the Internet Archive! We would love to see you at our various events this fall but if paying for the ticket is difficult for you, please email me (in bio) and we'll get you in (if possible).

                                  • psychoslave 10 hours ago

                                    Are they distributed events all around the world, or just wherever the team is gathered (San Francisco, I guess)?

                                    By the way, thank you all the teams in IA, what you provide is such an important thing for humanity.

                                    • zhynn an hour ago

                                      Thanks for helping to run my favorite library on earth.

                                      • NetOpWibby 11 hours ago

                                        I would love to work for IA but openings are rare

                                        • pabs3 10 hours ago

                                          If you are in Europe, consider Software Heritage (similar to IA but for source code) too:

                                          https://www.softwareheritage.org/jobs/

                                          • msephton 7 hours ago

                                            Internet Archive now have a presence in Amsterdam

                                        • vettyvignesh 9 hours ago

                                          Would love technical details around this feat, e.g. how you even crawl to begin with, storage, etc.

                                          • awesomeMilou 11 hours ago

                                            What events are we talking about here?

                                          • moralestapia 10 hours ago

                                            Hey, Q., so what's the size of the internet archive?

                                            • the_real_cher 6 hours ago

                                              I'm betting an exabyte, or close to it, maybe.

                                              • metalman 6 hours ago

                                                It is large enough that I am wondering whether the data, captured in actual physical magnetic charges, has a heft that a person could feel. Obviously the hardware would fill a house or something, but at what point does the world's data become a discernible physical reality, at least in theory?

                                              • southernplaces7 8 hours ago

                                                Most of all, I'm curious about how you reliably and securely store and host so many archived pages. Would you mind briefly explaining such a huge undertaking? Also, congratulations on this fantastic achievement. You guys are my go-to for so much information.

                                                Edit: And how many terabytes it all amounts to.

                                                • WhereIsTheTruth 9 hours ago

                                                  We all know the NSA has access to servers hosted in the U.S. How are you protecting the archive from malicious tampering? Are you using any form of immutable storage? Is it post-quantum secure?

                                                  • gosub100 5 hours ago

                                                    Why would they do that? Have you previously seen a case where they "maliciously tampered" with anyone's website?

                                              • msephton 7 hours ago

                                                1 trillion web pages archived is quite an achievement. But... there's no way to search them? You have to know what URL you want to pull from the archive, which reduces the usefulness of the service. I'd like to search through all those trillion pages for, say, the name of an artist, or for a filename, or for image content.

                                                • 1gn15 3 hours ago

                                                  I remember this functionality existing on Kagi or something. But I can't find it.

                                                  • qwertytyyuu 7 hours ago

                                                    That would be hell to index

                                                    • Exuma 6 hours ago

                                                      I imagine it would be no different than current indexing strategies with a temporal aspect baked in... it would act almost like a different site, and maybe roll up the results after the fact by domain

                                                      • citbl 7 hours ago

                                                        If it was a commercial problem, e.g. from Google, it would be solved.

                                                        The reality is that many things don't exist simply because someone isn't paid to do it.

                                                        • Keyframe 6 hours ago

                                                          Given how much AI companies have benefited by leeching off of IA and Common Crawl, it's a shame there isn't at least some money flowing back in.

                                                      • bluebarbet 6 hours ago

                                                        Consider the privacy implications of that. It would effectively create a parallel web where `robots.txt` counts for nothing and where it becomes - retroactively - impossible to delete one's site. Yes, there's ultimately no way to prevent it happening, given that the data is public. But to make the existing IA searchable is IMO just a terrible idea.

                                                        • 1gn15 an hour ago

                                                          Related: https://wiki.archiveteam.org/index.php/Robots.txt

                                                          (Also, consider that when you forbid such functionality, the only thing that happens is that its development becomes private. It's like DRM: it only hurts legitimate customers.)

                                                          • breakingcups 4 hours ago

                                                            Actually, I believe the IA respects robots.txt retroactively, e.g. putting something on the disallow list now removes the same page's scrapes from a year ago from public access in the Wayback Machine, but I'd love to be corrected on that.

                                                            • 1gn15 44 minutes ago

                                                              IIRC the IA no longer cares about robots.txt after it kept getting abused [1] to take down older pages. You can still request to take down pages, but it needs a form and a reason. [2]

                                                              (Remember, robots.txt is not a privacy measure, it's supposed to be something that prevents crawlers from getting stuck in tar pits!)

                                                              [1] https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea...

                                                              [2] https://help.archive.org/help/how-do-i-request-to-remove-som...

                                                              • bluebarbet 3 hours ago

                                                                It may do. I remember looking into it and not getting a definitive answer. The issue here is that taking a site offline has surely been widely understood as the ultimate robots.txt `Disallow` instruction to search engines. IMO we should respect that.

                                                            • emporas 6 hours ago

                                                              I use GPT web search, and I usually ask it to find textbooks from IA. It works really well for textbooks, but I'm not sure about web pages.

                                                            • pabs3 11 hours ago

                                                              If anyone wants to help feed in more stuff, ArchiveTeam is a related volunteer group that sends data to IA:

                                                              https://archiveteam.org/

                                                              • totaldude87 2 hours ago

                                                                So instead of scraping all webpages, one just has to pay the Archive and get all the data?

                                                                • ks2048 7 hours ago

                                                                  I wonder if Internet Archive and Common Crawl have worked together?

                                                                  How does their scope or infrastructure compare?

                                                                  I know they serve different purposes, but both are essentially doing similar things.

                                                                  • pabs3 6 hours ago

                                                                    I think IA ingests crawl WARCs from CC, as well as other groups like ArchiveTeam.

                                                                  • BiraIgnacio 5 hours ago

                                                                    Congratulations!

                                                                    • yupyupyups 4 hours ago
                                                                      • ChrisArchitect 12 hours ago
                                                                        • zghst 11 hours ago

                                                                          A great milestone for internet history!

                                                                          • not--felix 8 hours ago

                                                                            I wonder if openai has archived more pages by now

                                                                            • FooBarWidget 11 hours ago

                                                                              I'm kinda surprised IA hasn't long been shut down by copyright chasers.

                                                                              And for single page archives I tend to use archive.is nowadays. For as long as I can remember, IA has been unusably slow.

                                                                              But still kudos to them for the effort.

                                                                              • groos an hour ago

                                                                                It wasn't shut down but definitely hobbled after they lost the lawsuit and were forced to pull copyrighted content from their site that they used to allow signed-in users to check out an hour at a time. My visits to the site dropped 10x after this.

                                                                                • fragmede 10 hours ago

                                                                                  I very much don't get why all of the show "King of the Hill" is up on there.

                                                                                • i_have_to_speak 10 hours ago

                                                                                  Is there an index of all these pages?

                                                                                  • typpilol 12 hours ago

                                                                                    I thought this was going to be a technical article but there was nothing in it

                                                                                    • ehsanu1 12 hours ago

                                                                                      Seeing some stats would be fun. I wonder what the amount of data is here. And the distribution would be interesting too, especially since some pages are archived at multiple points in time, and pages have been getting heavier these days.

                                                                                    • lyu07282 9 hours ago

                                                                                        I was hoping this would include a talk by Jason Scott (@textfiles); his talks are always so much fun.

                                                                                      • lofaszvanitt 9 hours ago

                                                                                          Would be nice to have visit statistics per domain, so people who host their live sites could see who visits what on archive.org under their domain vs. their live site :).

                                                                                        • timmy777 9 hours ago

                                                                                            How do you prevent governments (and other people who can access the data) from rewriting history?

                                                                                            Do you hash the pages into some sort of blockchain?

                                                                                            The inability to rewrite history will be a fantastic gift to the world.

                                                                                          • itsme0000 11 hours ago

                                                                                              Yeah, but their view and download metrics are flat-out wrong all the time. If they weren't a nonprofit they'd be sued for that. But still, a great company - and a place for obsolete AWS equipment to retire.

                                                                                            • psychoslave 10 hours ago

                                                                                              What do you mean?