• openrisk 2 hours ago

    It's not obvious to me that what is missing here is another technical protocol rather than more effective 'social protocols'. If you haven't noticed, the major issues of today are not the scaling of message passing per se but the moderation of content and violations of the boundary between public and private. These issues are socially defined and cannot be delegated to (possibly algorithmic) protocols.

    In other words, what is missing is rules, regulations and incentives that are adapted to the way people use the digital domain and that keep the decentralized exchange of digital information within a consensus "desired" envelope.

    Providing capabilities in code and network design is of course a great enabler, but drifting into technosolutionism of the bitcoin type is a dead end. Society is not a static user of technical protocols. If left without matching social protocols, any technical protocol will be exploited and fail.

    The example of abusive hyperscale social media should be a warning: they emerged as a behavior; they were not specified anywhere in the underlying web design. Facebook is just one website, after all. Tim Berners-Lee probably did not anticipate that one endpoint would successfully fake being the entire universe.

    The deeper question is: do we want the shape of digital networks to reflect the observed concentration of real current social and economic networks, or do we want to use the leverage of this new technology to shape things in a different (hopefully better) direction?

    The mess we are in today is not so much a failure of technology as it is one of digital illiteracy, from the casual user all the way up to the most influential legal and political roles.

    • defanor 23 minutes ago

      AIUI, the "Decentralized" added to RSS here stands for:

      - Propagation (via asynchronous notifications), making it more like NNTP. Though perhaps that is not functionally very different from feed (RSS and Atom) aggregators: those just rely more on pulling than on pushing.

      - A domain name per user. This can be problematic: you have to be a relatively tech-savvy person with a stable income and living in an accommodating enough country (no disconnection of financial systems, blocking of registrar websites, etc) to reliably maintain a personal domain name.

      - Mandatory signatures. I would prefer OpenPGP over a fixed algorithm, though: otherwise it lacks cryptographic agility and reinvents parts of OpenPGP (including key distribution). And perhaps signing should be optional.

      - Bitcoin blockchain.

      I do not quite see how those help with decentralization, though propagation may help with discovery, which indeed tends to be problematic in decentralized and distributed systems. But that can be achieved with NNTP or aggregators. While the rest seems to hurt the "Simple" part of RSS.

      • nunobrito 2 hours ago

        NOSTR has solved most of these issues in a simple way. Anyone can generate a private/public key pair without emails or passwords, and anyone can send messages that you can verify as truly belonging to the person with that signature.

        There are hundreds of servers run today by volunteers, and there is little cost of entry, since even cellphones can be used as servers (nodes) to keep your private notes or the notes of the people you follow.

        There is now a file sharing service called "Blossom" which is decentralized in the same simple manner. I don't think I've seen a way there to specify custom domains; for the moment people can only use the public key to host simple web pages without a server behind them.

        Many of the topics on your page match what has been implemented there; it might be a good fit for you to improve it further.

        • remram 10 hours ago

          I would love to have an RSS interface where I can republish articles to a number of my own feeds (selectively or automatically). Then I can follow some of my friends' republished feeds.

          I feel like the "one feed" approach of most social platforms is not there to benefit users but to encourage doom-scrolling with FOMO. It would be a lot harder for them to get so much of users' time and tolerance for ads if it were actually organized. But it seems to me that there might not be that much work needed to turn an RSS reader into a very productive social platform for sharing news and articles.

          • James_K 3 minutes ago

            This interface already exists. It's called RSS. Simply make a feed titled "reposts" and add entries linking to other websites. I already have such a thing on my own website, with the precise hope that others will copy it.
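To illustrate, a minimal version of such a "reposts" feed can be generated with nothing but the Python standard library; the titles and URLs below are made-up placeholders:

```python
# Sketch of a "reposts" RSS feed: a channel titled "reposts" whose items
# simply link to other people's articles. Placeholder titles/URLs throughout.
import xml.etree.ElementTree as ET

def build_reposts_feed(items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "reposts"
    ET.SubElement(channel, "link").text = "https://example.org/reposts.xml"
    ET.SubElement(channel, "description").text = "Links I found worth sharing"
    for title, url in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
    return ET.tostring(rss, encoding="unicode")

feed = build_reposts_feed([
    ("An article I liked", "https://example.com/article"),
])
```

Any ordinary RSS reader can subscribe to the result; republishing one more link is just appending another item.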

            • fabrice_d 8 hours ago

              That looks close to custom feeds in the ATProto / BlueSky world.

            • glenstein 9 hours ago

              While everyone is waiting for Atproto to proto, ActivityPub is already here. This is giving me "Sumerians look on in confusion as god creates world" vibes.

              https://theonion.com/sumerians-look-on-in-confusion-as-god-c...

              • echelon 5 hours ago

                These are still too centralized. The protocol should look more like BitTorrent.

                - You don't need domain names for identity. Signatures are enough. An optional extension could contain emails and social handles in the payload if desired.

                - You don't need terabytes of storage. All content can be ephemeral. Nodes can have different retention policies, and third party archival services and client-side behavior can provide durable storage, bookmarking/favoriting, etc.

                - The protocols should be P2P-first rather than federated. This prevents centralization and rule by federated cabal. Users can choose their own filtering, clustering, and prioritization.
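The retention-policy point above can be sketched in a few lines; the node types and windows here are illustrative assumptions, not part of any existing protocol:

```python
# Sketch: every node keeps posts only within its own retention window, so
# content is ephemeral by default while archival nodes (or clients that
# bookmark) provide durability. All names and numbers are made up.
import time

class Node:
    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.posts = {}  # post_id -> (first_seen_timestamp, content)

    def store(self, post_id, content, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.posts[post_id] = (ts, content)

    def prune(self, now=None):
        now = now if now is not None else time.time()
        self.posts = {pid: (ts, c) for pid, (ts, c) in self.posts.items()
                      if now - ts <= self.retention}

relay = Node(retention_seconds=7 * 86400)       # an ordinary node keeps a week
archive = Node(retention_seconds=float("inf"))  # a third-party archival service

t0 = 1_000_000.0
for node in (relay, archive):
    node.store("old", "a month-old post", timestamp=t0)
    node.store("new", "a fresh post", timestamp=t0 + 30 * 86400)
    node.prune(now=t0 + 30 * 86400)
```

After pruning, the relay has dropped the month-old post while the archive still holds both, which is exactly the "different retention policies" split.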

              • wmf 9 hours ago

                1. Domain names: good.

                2. Proof of work time IDs as timestamps: This doesn't work. It's trivial to backdate posts just by picking an earlier ID. (I don't care about this topic personally, but people are concerned about backdating, not forward-dating.)

                N. Decentralized instances should be able to host partial data: This is where I got lost. If everybody is hosting their own data, why is anything else needed?

                • brisky 3 hours ago

                  Hi, author here. Regarding backdating: it is a valid concern. I did not mention it in the article, but in my proposed architecture users could post links of others (consider that a retweet). For links that have reposts, additional security checks could be implemented to verify the validity of the post time.

                  Regarding hosting partial data: there should be an option to host just recent data, for the past month or another time frame, rather than the full DB of URLs. This would improve decentralization, as each instance could have lower storage requirements while the total information would still be present on the network.

                  • macawfish 8 hours ago

                    Domain names are fine but they shouldn't be forced onto anyone. Nothing about DID or any other flexible and open decentralized naming/identity protocol will prevent anyone from using domain names if they want to.

                    • hinkley 9 hours ago

                      Time services can help with these sorts of things. They aren’t notarizing the message. You don’t trust the service to validate who wrote it or who sent it, you just trust that it saw these bytes at this time.
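A rough sketch of such a receipt, using an HMAC as a stand-in for the public-key signature a real timestamping service (e.g. RFC 3161) would issue; the key and message are placeholders:

```python
# The service attests only "I saw these bytes at this time": it receipts
# the hash of the message plus a timestamp. Verification proves time of
# sight, not authorship. An HMAC stands in for a real signature here.
import hashlib
import hmac

SERVICE_KEY = b"time-service-secret"  # placeholder for the service's real key

def issue_receipt(message: bytes, seen_at: int) -> str:
    digest = hashlib.sha256(message).hexdigest()
    payload = f"{digest}:{seen_at}".encode()
    return hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_receipt(message: bytes, seen_at: int, receipt: str) -> bool:
    digest = hashlib.sha256(message).hexdigest()
    payload = f"{digest}:{seen_at}".encode()
    expected = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt)

receipt = issue_receipt(b"hello feed", 1700000000)
```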

                      • catlifeonmars 8 hours ago

                        Something that maintains a mapping between a signature+domain and the earliest seen timestamp for that combination? I think at that point the time service becomes a viable aggregated index for readers to use when looking for updates. I think this also solves the problem of lowering the cost of participation: since the index would only store a small amount of data per post, and since indexes can be composed by the reader, it could scale cost-effectively.
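That mapping can be sketched as a tiny earliest-seen index; all the names and timestamps here are illustrative:

```python
# The index records, per (signature, domain) pair, the service's own clock
# at first sight and ignores later re-announcements, so a post cannot be
# backdated below the moment the index first saw it. It doubles as a cheap
# update feed for readers.
class EarliestSeenIndex:
    def __init__(self):
        self.first_seen = {}  # (signature, domain) -> earliest timestamp

    def observe(self, signature, domain, now):
        # Only the first sighting counts.
        return self.first_seen.setdefault((signature, domain), now)

    def updates_since(self, domain, since):
        # Readers poll this instead of every origin server.
        return [sig for (sig, d), ts in self.first_seen.items()
                if d == domain and ts > since]

idx = EarliestSeenIndex()
idx.observe("sig-a", "alice.example", now=100)
idx.observe("sig-a", "alice.example", now=50)  # re-announced "earlier": ignored
```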

                        • hinkley 3 hours ago

                          I’ve only briefly worked with these but got a rundown from someone more broadly experienced with them. Essentially you treat trust as a checklist. I accept this message (and any subsequent transactions implied by its existence) if it comes from the right person, was emitted during the right time frame (whether or not I saw it during that time frame), and <insert other criteria here>. If I miss the message due to transmission errors or partitioning, I can still honor it later, even though it now changes the consequences of some later message I can now determine to have arrived out of order.

                      • imglorp 6 hours ago

                        Recent events have also taught us that proof of work is a serious problem for the biosphere when serious money is involved and everybody scales up. Instead, it seems proof of stake is closer to what is required.

                        • wmf 3 hours ago

                          Yeah, a verifiable delay function is probably better for timestamping.

                        • evbogue 9 hours ago

                          If the data is a signed hash, why does it need the domain name requirement? One can host self-authenticating content in many places.

                          And one can host many signing keys at a single domain.
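The self-authenticating part can be sketched with content addressing alone; a real system would additionally sign the hash with the author's key, which is omitted here to stay standard-library only, and the "mirrors" are just placeholder dicts:

```python
# Content is addressed by the hash of its bytes, so any host can serve it
# and the reader verifies locally; no domain name is needed for integrity.
import hashlib

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def fetch_and_verify(address: str, hosts):
    for host in hosts:  # each "host" maps address -> bytes
        data = host.get(address)
        if data is not None and content_address(data) == address:
            return data  # valid regardless of which host served it
    return None

post = b"my signed post"
addr = content_address(post)
mirror_a = {}            # this mirror dropped the content
mirror_b = {addr: post}  # a different host still has it
result = fetch_and_verify(addr, [mirror_a, mirror_b])
```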

                          • catlifeonmars 8 hours ago

                            In the article, the main motivation for requiring a domain name is to raise the barrier to entry above "free" to mitigate spamming/abuse.

                            • uzyn 6 hours ago

                              A one-time fixed cost will not deter spam; it only encourages more spamming to lower the average per-spam cost. Email spamming requires some system setup, a one-time fixed cost above $10/year, but that does not stop spam.
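The amortization argument in numbers (purely illustrative):

```python
# A one-time fixed cost divided across spam volume tends toward zero, so it
# cannot price spam out; only a recurring per-message cost (or revocation of
# the paid-for identity) changes the spammer's economics.
fixed_cost = 10.0  # e.g. a $10/year domain
per_message = {}
for messages in (1, 1_000, 1_000_000):
    per_message[messages] = fixed_cost / messages
```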

                            • wmf 8 hours ago

                              One person per domain is essentially proof of $10.

                              • hinkley 2 hours ago

                                There was a psychological study which found that community moderation tends to be self-healing if, and only if, punishing others for a perceived infraction comes at a cost to the punisher.

                                I believe I have the timeline right that this study happened not too long before StackOverflow got the idea that getting upvoted gives you ten points and downvoting someone costs you two. As long as you’re saying something useful occasionally instead of disagreeing with everyone else, your karma continues to rise.

                          • neuroelectron 2 hours ago

                            I think it's pretty clear they don't want us to have such a protocol. Google's attack on RSS is probably the clearest example of this, but there are also several more foundational issues that prevent multicast and similar mechanisms from being effective.

                            • teddyh 10 hours ago

                              Is he reinventing USENET netnews?

                              • bb88 8 hours ago

                                Yes and no. I think the primary issue is that I could never just create a new newsgroup back when Usenet was popular and get it to syndicate with other servers.

                                The other issue is who's going to host it? I need a port somehow (CGNAT be damned!).

                                • hinkley 9 hours ago

                                  Spam started on Usenet. As did Internet censorship. You can’t just reinvent Usenet. Or we could all just use Usenet.

                                  • stackghost 6 hours ago

                                    >Or we could all just use Usenet.

                                    Usenet doesn't scale. The Eternal September taught us that.

                                    To bring Usenet back into the mainstream would require a major protocol upgrade, to say nothing of the seismic social shift.

                                    • hinkley 3 hours ago

                                      That’s also my feeling. There’s a space for something that has some of the same goals as Usenet while also learning from the past.

                                      I don’t think it’s a fruitful or useful comment to say something is “like Usenet” as a dismissal. So what if it is? It was useful as hell when it wasn’t terrible.

                                • pfraze 10 hours ago

                                  Atproto supports deletes and partial syncs

                                  • fiatjaf 10 hours ago

                                    Nostr is kind of what you're looking for.

                                    • doomroot 6 hours ago

                                      My thought as well.

                                      ps When is your SC podcast coming back?

                                    • est 6 hours ago

                                      Pity RSS is one-way. There's no standard way of commenting or interacting.

                                      • uzyn 6 hours ago

                                        Interaction/comment syndication would be very interesting. This is, I feel, what makes proprietary social media so addictive.

                                        • hinkley 3 hours ago

                                          Someone on the NCSA Mosaic team had a similar idea, but after they left nobody remaining knew what to do with it or how it worked.

                                          It took me 20 years to decide maybe they were right. A bunch of Reddits more tightly associated with a set of websites and users than with a centralized ad platform would be fairly good, if you had browser support for handling the syndicated comments. You could have one for your friends or colleagues, one for watchdog groups to discuss their fact checking or responses to a new campaign by a troublesome company.

                                          • sali0 4 hours ago

                                            It's an interesting point. I haven't even read the article yet, but I have been reading the comments. Maybe they were the star of the show all along.

                                        • hkt 10 hours ago

                                          https://en.wikipedia.org/wiki/Syndie was a decent attempt at this which is, I gather, still somewhat alive.

                                          • cyberax 6 hours ago

                                            That is a really great list of requirements.

                                             One area that is overlooked is commercialization. I believe that a decentralized protocol needs to support some kind of paid subscription and/or micropayments.

                                            WebMonetization ( https://webmonetization.org/docs/ ) is a good start, but they're not tackling the actual payment infrastructure setup.

                                            • convolvatron 10 hours ago

                                              A lot of the use cases for this would have been covered by the protocol designs suggested by Floyd, Jacobson and Zhang in https://www.icir.org/floyd/papers/adapt-web.pdf

                                              But it came right at a time when the industry had kind of just stopped listening to that whole group, and it was built on multicast, which was a dying horse.

                                              But if we had that facility as a widely implemented open standard, things would be much different, and arguably much better, today.

                                              • rapnie 3 hours ago

                                                > built on multicast, which was a dying horse.

                                                There's a fascinating research project Librecast [0], funded by the EU via NLnet, that may boost multicast right into modern tech stacks again.

                                                [0] https://www.librecast.net/about.html

                                              • toomim 9 hours ago

                                                I am working on something like this. If you are, too, please contact me! toomim@gmail.com.

                                                • evbogue 8 hours ago

                                                  I'm working on something like this too! I emailed you.

                                                • Uptrenda 9 hours ago

                                                  >Everybody has to host their own content

                                                  Yeah, this won't work. Like, at all. This idea has been tried over and over in various decentralized apps, and the problem is that as nodes go offline and online, links quickly break...

                                                  No offense, but this is a very half-assed post that glosses over what has been one of the basic problems in the space. It's a problem that inspired research into DHTs, various attempts at decentralized storage systems, and, most recently, some interesting hybrid approaches that seem like they will actually work.

                                                  >Domain names should be decentralized IDs (DIDs)

                                                  This is a hard problem by itself. All the decentralized name systems I've seen suck. People currently try to use DHTs. I'm not sure that a DHT can provide reliability, though, and since the name is the root of the entire system it needs to be 100% reliable. In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try to "decentralize" everything.

                                                  >Proof of work time IDs can be used as timestamps

                                                  Horribly inefficient for a social feed, and orphans are going to screw you even more.

                                                  I think you've not thought about this very hard.

                                                  • catlifeonmars 8 hours ago

                                                    > In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try "decentralize" everything.

                                                    Not author, but that is what the global domain system is. There are a handful of root name servers that are baked into DNS resolvers.

                                                    • Uptrenda 43 minutes ago

                                                      You're exactly right. It seems to work well enough for domains already so I kept the model.