Triptych Proposals (alexanderpetros.com)
Submitted by felipemesquita 2 days ago
  • alexpetros 2 days ago

    Co-author here! I'll let the proposal mostly speak for itself but one recurring question it doesn't answer is: "how likely is any of this to happen?"

    My answer is: I'm pretty optimistic! The people on WHATWG have been responsive and offered great feedback. These things take a long time but we're making steady progress so far, and the webpage linked here will have all the status updates. So, stay tuned.

    • ksec a day ago

      Thank you for the work. It is tedious and takes a long time. I know we are getting some traction at WHATWG.

      But do we know if Google or Apple have shown any interest? In the end, you could still end up with acceptance at WHATWG but Chrome / Safari not supporting it.

      • theptip a day ago

        How much would HTMX internals change if these proposals were accepted? Is this a big simplification or a small amount of what HTMX covers?

        Similarly, any interesting ways you could see other libraries adopting these new options?

        • recursivedoubts a day ago

          i don't think it would change htmx at all, we'd probably keep the attribute namespaces separate just to avoid accidentally stomping on behavior

          i do think it would reduce and/or eliminate the need for htmx in many cases, which is a good thing: the big idea w/htmx is to push the idea of hypermedia and hypermedia controls further, and if those ideas make it into the web platform so much the better

          • paulddraper a day ago

            This covers a lot of the common stuff.

            This is native HTMX, or at least a good chunk of the basics.

          • philosopher1234 a day ago

            Is it possible to see their feedback? Is it published somewhere public?

          • divbzero 2 days ago

            When I was reading “The future of htmx” blog post which is also being discussed on HN [1], the “htmx is the new jQuery” idea jumped out at me. Given that jQuery has been gradually replaced by native JavaScript [2], I wondered what web development could look like if htmx is gradually replaced by native HTML.

            Triptych could be it, and it’s particularly interesting that it’s being championed by the htmx developers.

            [1]: https://news.ycombinator.com/item?id=42613221

            [2]: https://youmightnotneedjquery.com/

          • recursivedoubts 2 days ago

            this is a set of proposals by Alex Petros, on the htmx team, to move some of the ideas of htmx into the HTML spec. He has begun work on the first proposal, allowing HTML to access PUT, DELETE, etc.

            https://alexanderpetros.com/triptych/form-http-methods

            This is going to be a long term effort, but Alex has the stubbornness to see it through.

            • croemer a day ago

              Congrats, you seem to be a co-author of the proposal as well, right?

              • recursivedoubts a day ago

                i help alex out a bit, but he's the main author

            • emmanueloga_ a day ago

              In the meanwhile, I found that enabling page transitions is a progressive enhancement tweak that can go a long way in making HTML replacement unnecessary in a lot of cases.

              1) Add this to your css:

                  @view-transition { navigation: auto; }
              
              2) Profit.

              Well, not so fast haha. There are a few details that you should know [1].

              * Firefox has not implemented this yet, but it seems likely they are working on it.

              * All your static assets need to be properly cached to make the best use of the browser cache.

              Also, prefetching some links on hover, like those on a navbar, is helpful.

              Add a css class "prefetch" to the links you want to prefetch, then use something like this:

                  document.addEventListener("mouseover", ({ target }) => {
                    // Find the nearest ancestor link that's marked for prefetching
                    // (mouseover may fire on a child element inside the <a>)
                    const link = target.closest("a.prefetch");
                    if (!link) return;
                    link.classList.remove("prefetch"); // only prefetch each link once

                    const linkElement = document.createElement("link");
                    linkElement.rel = "prefetch";
                    linkElement.href = link.getAttribute("href");
                    document.head.appendChild(linkElement);
                  });
              
              There's more work on prefetching/prerendering going on but it is a lil green (experimental) at the moment [2].

              --

              1: https://developer.mozilla.org/en-US/docs/Web/CSS/@view-trans...

              2: https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
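
              For the experimental Speculation Rules work mentioned in [2], here is a rough sketch of injecting prefetch rules at runtime. The paths are made up for illustration, and browser support should be checked first since the API is still experimental:

              ```javascript
              // Build a Speculation Rules payload for a list of URLs
              function buildPrefetchRules(urls) {
                return JSON.stringify({ prefetch: [{ urls }] });
              }

              // In a browser, inject it as a <script type="speculationrules"> tag
              if (typeof document !== "undefined") {
                const s = document.createElement("script");
                s.type = "speculationrules";
                s.textContent = buildPrefetchRules(["/docs", "/pricing"]); // hypothetical paths
                document.head.appendChild(s);
              }
              ```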

              • alexpetros a day ago

                In many cases, browsers will also automatically perform a "smooth" transition between pages if your caching settings are done well, as described above. It's called paint holding. [0]

                One of the driving ideas behind Triptych is that, while HTML is insufficient in a couple key ways, it's a way better foundation for your website than JavaScript, and it gets better without any effort from you all the time. In the long run, that really matters. [1]

                [0] https://developer.chrome.com/blog/paint-holding

                [1] https://unplannedobsolescence.com/blog/hard-page-load/

              • tinthedev 2 days ago

                It looks wonderful, but the adoption will be a thoroughly uphill battle. Be it from browsers, be it from designs and implementations on the web.

                I'll be first in line to try it out if it ever materializes, though!

                • Dan42 14 hours ago

                  No, please, just no.

                  The idea of using PUT, DELETE, or PATCH here is entirely misguided. Maybe it was a good idea, but history has gone in a different direction so now it's irrelevant. About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems. This inconsistency led to unpredictable failures, varying by website, network, and the specific proxy or caching software in use.

                  The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

                  Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing. The entire internet infrastructure operates on these semantics, with little to no consideration for other HTTP verbs. Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.

                  Please let's just use what already works. GET for reading, POST for writing. That’s all we need to define transport behavior. Any further differentiation—like what kind of read or write—is application-specific and should be decided by the endpoints themselves.

                  Even the <form> element’s "action" attribute is built for this simplicity. For example, if your resource is /tea/genmaicha/, you could use <form method="post" action="brew">. Voilà, relative URLs in action! This approach is powerful, practical, and aligned with the infrastructure we already rely on.

                  Let’s not overcomplicate things for the sake of theoretical perfection. KISS.
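
                  As a quick check of the relative URL resolution described above (example.com is just a placeholder host), the standard URL API resolves a form action of "brew" the same way a browser would:

                  ```javascript
                  // Resolve "brew" against the page URL /tea/genmaicha/
                  const resolved = new URL("brew", "https://example.com/tea/genmaicha/");
                  console.log(resolved.pathname); // "/tea/genmaicha/brew"
                  ```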

                  • alexpetros 10 hours ago

                    > About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems.

                    This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]

                    > The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

                    This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]

                    > Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.

                    What you're describing isn't the de facto standard, it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.

                    > Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.

                    You're not making an actual argument here, just asserting that it takes time (I agree) and that it has no value (I disagree, and wrote a really long document about why).

                    [0] https://alexanderpetros.com/triptych/form-http-methods#ref-6

                    [1] https://alexanderpetros.com/triptych/form-http-methods#rest-...

                    [2] https://alexanderpetros.com/triptych/form-http-methods#usage...

                    [3] https://alexanderpetros.com/triptych/form-http-methods#appli...

                    • Dan42 25 minutes ago

                      > This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the roll back to be temporary. [0]

                      I see no such thing in the link you have there. #ref-6 starts with:

                      > [6] On 01/12/2011, at 9:57 PM, Julian Reschke wrote: "One thing I forgot earlier, and which was the reason

                      But the link you have there [1] does not contain any such comment. Wrong link?

                      [1] https://lists.w3.org/Archives/Public/public-html-comments/20...

                      (will reply to other points as time allows, but I wanted to point out this first)

                  • ttymck 2 days ago

                    Looks really pragmatic and I'd be glad to see this succeed.

                    Is anyone able to credibly comment on the likelihood that these make it into the standard, and what the timeline might look like?

                    • recursivedoubts 2 days ago

                      Alex is working on it now and we have contacts in the browser teams. I’m optimistic but it will be a long term (decades) project.

                    • KronisLV 2 days ago

                      Good luck!

                      The partial page replacement in particular sounds like it might be really interesting and useful to have as a feature of HTML, though ofc more details will emerge with time.

                      Unless it ended up like PrimeFaces/JSF where more often than not you have to finagle some reference to a particular table row in a larger component tree, inside of an update attribute for some AJAX action and still spend an hour or two debugging why nothing works.

                      • mg 2 days ago

                        What is the upside of

                            <button action="/users/354" method="DELETE"></button>
                        
                        over

                            <button action="/users/delete?id=354"></button>
                        
                        ?
                        • bryanrasmussen a day ago

                          Everybody has already pointed out the problem with GETTING a deletable resource, but I figured I would add this (and maybe someone will remember extra specifics).

                          Around 2007 or so, there was a case where a site was using GET to delete user accounts. Of course, you had to be logged in to the site to do it, so what was the harm, the devs figured. However, a popular extension made by Google for Chrome started prefetching GET requests for users, so merely arriving at the account page where you could theoretically delete your account ended up deleting the account.

                          It was pretty funny, because I wasn't involved in either side of the fight that ensued.

                          I would provide more detail than that, but I'm finding it difficult to search for it, I guess Google has screwed up a lot of other stuff since then.

                          on edit: my memory must be playing tricks on me. I think it was more like 2010 or 2011 that this happened; at first I thought it happened before I started working at Thomson Reuters, but now I think it must have happened within my first couple of years there.

                        • thayne 2 days ago

                          A GET request to `/users/delete?id=354` is dangerous. In particular, it is more vulnerable to a CSRF attack, since a form on another domain can just make a request to that endpoint, using the user's cookies.

                          It's possible to protect against this using various techniques, but they all add some complexity.

                          Also, the former is more semantically correct in terms of HTTP and REST.
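
                          One of those protection techniques, the synchronizer-token pattern, sketched very roughly (the function names here are invented for illustration):

                          ```javascript
                          const crypto = require("crypto");

                          // Issue a random token, store it in the session, and embed it in the form
                          function generateCsrfToken() {
                            return crypto.randomBytes(32).toString("hex");
                          }

                          // On a state-changing request, require the submitted token to match the
                          // session's token; timingSafeEqual avoids leaking info via comparison time
                          function isValidCsrfToken(sessionToken, requestToken) {
                            if (!sessionToken || !requestToken) return false;
                            const a = Buffer.from(sessionToken);
                            const b = Buffer.from(requestToken);
                            return a.length === b.length && crypto.timingSafeEqual(a, b);
                          }
                          ```

                          A form on another domain can still submit to the endpoint, but it cannot read or guess the token, so the request is rejected.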

                          • hnbad a day ago

                            An important consideration is also that browsers may prefetch GET requests.

                          • alexpetros 2 days ago

                            Hey there, good question! Probably worth reading both sections 6 and 7 for context, but I answer this question specifically in section 7.2: https://alexanderpetros.com/triptych/form-http-methods#ad-ho...

                            • croemer a day ago

                              HTTP/1.1 spec, section 9.1.1 Safe Methods:

                              > Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

                              > In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval.

                              See the "GET scenario" section of https://owasp.org/www-community/attacks/csrf to learn why ignoring the HTTP spec can be dangerous.

                              Or this blog post: https://knasmueller.net/why-using-http-get-for-delete-action...

                              • AndrewHampton 2 days ago

                                What HTTP method would you expect the second example to use? `GET /users/delete?id=354`?

                                The first has the advantage of being a little clearer at the HTTP level with `DELETE /users/354`.

                                • mg 2 days ago

                                  GET, because that is also the default for all other elements, I think: form, a, img, iframe, video...

                                  Ok, but what is the advantage of being "clear at the HTTP level"?

                                  • necubi 2 days ago

                                    GET shouldn't be used for a delete action, because it's specified as a safe method[0], which means essentially read-only. On a practical level, clients (like browsers) are free to cache and retry GET requests, which could lead to deletes not occurring or occurring when not desired.

                                    [0] https://datatracker.ietf.org/doc/html/rfc7231#section-4.2.1

                                    • JimDabell a day ago

                                      That means I can make you delete things by embedding that delete URL as the source of an image on a page you visit.

                                      GET is defined to be safe by HTTP. There have been decades of software development that have happened with the understanding that GETs can take place without user approval. To abuse GET for unsafe actions like deleting things is a huge problem.

                                      This has already happened before in big ways. 37Signals built a bunch of things this way and then the Google Web Accelerator came along, prefetching links, and their customers suffered data loss.

                                      When they were told they were abusing HTTP, they ignored it and tried to detect GWA instead of fixing their bug. Same thing happened again, more things deleted because GET was misused.

                                      GET is safe by definition. Don’t abuse it for unsafe actions.

                                      • mg a day ago

                                        You can already do POST requests by embedding forms and/or JS. And with the proposed <button method=DELETE> you could also embed that. So I don't see how the proposal of adding more HTTP methods to html elements prevents abuse.

                                        • williamdclt a day ago

                                          I think you're misunderstanding what your parent meant by "abuse".

                                          In this context it meant "misuse"; there's no malicious actor involved. GET should have no side effects, which enables optimisations like prefetching and caching: they used it for an effectful operation (deletion), so prefetching caused a bug. It's the developers' fault for not respecting the guarantees expected from GET.

                                          If they'd used POST, everything would have been fine. There's much less of an argument for using `POST /whatever/delete` rather than `DELETE /whatever`. At this point it's a debate on whether REST is a good fit or not for the application.

                                          • masklinn a day ago

                                            It prevents "required" abuse of the HTTP protocol (having to pipeline everything via POST even though that's not its purpose), without the requirement of adding javascript to the page.

                                        • lionkor 2 days ago

                                          Well, it's correct, so it's likely to be optimized correctly, to aid in debugging, to make testing easier and clearer, and generally just to be correct.

                                          Correctness is very rarely a bad goal to have.

                                          Also, of course, different methods have different rules, which you know as an SE. For example, PUT, PATCH, and DELETE have very different semantics in terms of repeatability of requests.

                                          • recursive 2 days ago

                                            GETs have no side effects, by specification. DELETEs can have side effects.

                                        • recursivedoubts 2 days ago

                                          implied idempotence

                                          • LegionMammal978 a day ago

                                            I'd say deleting a user is pretty idempotent: deleting twice is the same as deleting once, as long as you aren't reusing IDs or something silly like that. It's more that GET requests shouldn't have major side effects in the first place.
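
                                            That distinction can be sketched in a few lines; the in-memory store and status codes are illustrative only:

                                            ```javascript
                                            // Deleting is idempotent: the second call changes nothing,
                                            // even though the response status may differ (204 vs. 404)
                                            const users = new Map([[354, { name: "Alice" }]]);

                                            function deleteUser(id) {
                                              const existed = users.delete(id);
                                              return existed ? 204 : 404; // server state is the same either way
                                            }
                                            ```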

                                        • andrewflnr 2 days ago

                                          > giving buttons to ability

                                          Might want to fix that. :)

                                          • tln 2 days ago

                                            I haven't seen the proposal, but buttons can already set the form method (and action, and more). So I guess the "Button HTTP Requests" will just save the need to nest one tag?

                                                <form><button type="submit" formaction="/session" formmethod="DELETE"></button></form>
                                                <form action="/session" method="DELETE"><button type="submit"></button></form>
                                            • andrewflnr 2 days ago

                                              To be clear, I was referring to the minor typo.

                                              • jjcm 2 days ago

                                                This proposal also includes the ability to update a target DOM element with the response from that delete action.

                                            • undefined a day ago
                                              [deleted]
                                              • Devasta 2 days ago

                                                It's genuinely incredible that we are more than 20 years since the takeover of HTML from the W3C and there isn't anything in the browser approaching even one tenth of the capability of XForms.

                                                I wish the people behind this initiative luck and hope they succeed but I don't think it'll go anywhere; the browser devs gave up on HTML years ago, JavaScript is the primary language of the web.

                                                • undefined 2 days ago
                                                  [deleted]
                                                  • netcraft 2 days ago

                                                    I love these. It's the things we've been doing (or attempting to do) with our web pages for decades. I've written tons of jQuery to do these exact things, and lots of React code as well.

                                                    I think it's an uphill battle, but I am hopeful.

                                                    • unit149 a day ago

                                                      [dead]

                                                      • unit149 2 days ago

                                                        [dead]

                                                        • motoboi 2 days ago

                                                          JSON REST APIs are just hypermedia REST APIs with the data compressed. The compression format is JSON, and the dictionary is the hypermedia relations previously passed to the client.

                                                          It's 2025; the client doesn't need to be generic and able to surf and discover the internet like it's 2005.

                                                          The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.

                                                          • recursivedoubts 2 days ago

                                                            No, they aren’t.

                                                            https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...

                                                            Your client is already generic you just aren’t using that functionality:

                                                            https://htmx.org/essays/hypermedia-clients/

                                                            • colordrops 2 days ago

                                                              Maybe just give up the ghost and use a new unambiguous term instead of REST. Devs aren't going to let go of their JSON apis as browsers often are not the only, or even main, consumers of said APIs.

                                                              Creating frameworks and standards to support "true" RESTful APIs is a noble goal, but for most people it's a matter of semantics as they aren't going to change how they are doing things.

                                                              A list of words that have changed meaning, sometimes even the opposite, of their original meaning:

                                                              https://ideas.ted.com/20-words-that-once-meant-something-ver...

                                                              It seems these two discussions should not be conflated: 1. What RESTful originally meant, and 2. The value of RESTful APIs and when they should and shouldn't be used.

                                                              • WorldMaker a day ago

                                                                I think "both sides" right now are a bit wrong about the other. Content Negotiation is also an ancient part of REST. User Agent A prefers HTML and User Agent B prefers XML and User Agent C prefers JSON and all of those are still valid representations of a resource, and a good REST API can deliver some or all three as it has capability to do so. It's very much in the spirit of REST to provide HTML for your Browsers and JSON for your CLI apps and machine-to-machine calls.

                                                                This shouldn't be a "war" between "HTML is the original REST" and "JSON is what everyone means today by REST", this should be a celebration together that if these proposals pass we can do both better together. Let User Agents negotiate their content better again. It's good for JSON APIs if the Browser User Agents "catch up" to more features of REST. The JSON APIs can sometimes better specialize in the things their User Agents need/prefer if they aren't also doing all the heavy lifting for Browsers, too. It's good for the HTML APIs if they can do more of what they were originally intended to do and rely on JS less. Servers get a little more complicated again, if they are doing more Content Negotiation, but they were always that complicated before, too.

                                                                REST says "resources" it doesn't say what language those resources are described in and never has. REST means both HTML APIs and JSON APIs. (Also, XML APIs and ASN.1 APIs and Protobuf APIs and… There is no "one true" REST.)

                                                                • alexpetros 10 hours ago

                                                                  In the near future I'll write a blog about this, but the short answer is that even though more developers use REST incorrectly than not, it's still the term that best communicates our intent to the audience we are trying to reach.

                                                                  Eventually, I would like that audience to be "everyone," but for the time being, the simplest and clearest way to build on the intellectual heritage that we're referencing is to use the term the same way they did. I benefited from Carson's refusal to let REST mean the opposite of REST, just as he benefited from Martin Fowler's usage of the term, who benefited from Leonard Richardson's, who benefited from Roy Fielding's.

                                                                  • thatsafeature2 2 days ago

                                                                    RESTful is an oddly-specific term, so I don't see the point of changing the meaning.

                                                                    Feel free to change the meaning of 'agile' to mean 'whatever' (which is how it's interpreted by 99.99% of the population), but leave things like RESTful alone.

                                                                    Signed, CEO of htmx

                                                                    • BlueTemplar 2 days ago

                                                                      It's weird that you would argue "people won't change" at the same time as you point out how word meanings change.

                                                                      Have you forgotten how XML was all the rage not that long ago?

                                                                      Also, specific people might not change, but they do retire/die, and new generations might have different opinions...

                                                                      • quuxplusone a day ago

                                                                        > It's weird that you would argue "people won't change" at the same time as you point out how word meanings change.

                                                                        "People won't change" does not imply "people don't change"; "I observe change" does not imply "I cause change."

                                                                        Dante's Paradiso XVII.37–42 (Hollander translation): "Contingent things [...] are all depicted in the Eternal Sight, / yet are by that no more enjoined / than is a ship, moved downstream on a river's flow, / by the eyes that mirror it."

                                                                        > Also, specific people might not change, but they do retire/die, and new generations might have different opinions.

                                                                        Yes, that's certainly the case. "Science advances one funeral at a time." https://en.wikipedia.org/wiki/Planck%27s_principle

                                                                        • colordrops a day ago

                                                                          People change organically, but people can't easily be changed intentionally.

                                                                          I'm not suggesting that going back to the original meaning is a bad thing, in fact more power to those who are attempting this. I'm just suggesting that instead of moving the mountain, they could just go around it.

                                                                    • alexpetros 2 days ago

                                                                      It sounds like the client you're describing is less capable than the client of 2005, and I'd be curious to hear why you think that's a good thing.

                                                                      • cryptonector 2 days ago

                                                                        The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API. Hypermedia is completely self-describing when you have human users involved but not when you don't have human users involved. If you insist on HATEOAS as the substrate of the API then you need to give us a schema language that we can use to automate things. Then you can have your hypermedia as part of the UI and the API.

                                                                        The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication. You can cure that code duplication by just using the API from JavaScript on the user-agent to render the UI from data, and now you're essentially using something like a schema but with hand-compiled codecs to render the UI from data.

                                                                        Even if you go with hypermedia, using that as your API is terribly inefficient in terms of bandwidth for bulk data, so devs invariably don't use HTML or XML or any hypermedia for bulk data. If you have a schema then you could "compress" (dehydrate) that data using something not too unlike FastInfoSet by essentially throwing away most of the hypermedia, and you can re-hydrate the hypermedia where you need it.

                                                                        So I think GP is not too far off. If we defined schemas for "pages" and used codecs generated or interpreted from those schemas then we could get something close to ideal:

                                                                          - compression (though the data might still be highly compressible with zlib/zstd/brotli/whatever, naturally)

                                                                          - hypermedia

                                                                          - structured data with programmatic access methods (think XPath, JSONPath, etc.)
                                                                        
                                                                        The cost of this is: a) having to define a schema for every page, b) the user-agent having to GET the schema in order to "hydrate" or interpret the data. (a) is not a new cost, though a schema language understood by the user-agent is required, so we'd have to define such a language and start using it -- (a) is a migration cost. (b) is just part of implementing in the user-agent.
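That dehydrate/rehydrate flow can be sketched in a few lines (all names and the record shape here are invented for illustration): a "schema" maps compact records to hypermedia fragments, so only the data travels per request and the user-agent re-expands it into HTML, hypermedia controls included.

```javascript
// Hypothetical sketch: the "schema" is a record-to-HTML mapping the
// user-agent could fetch once and cache; only compact JSON travels
// per request after that.

// The schema: how to hydrate one record into hypermedia.
const userSchema = {
  hydrate: (u) => `<li><a href="/users/${u.id}">${u.name}</a></li>`,
};

// Dehydrated payload: plain structured data, no markup.
const payload = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

// Rehydration: the user-agent applies the schema to rebuild the
// hypermedia, links (hypermedia controls) included.
function rehydrate(schema, records) {
  return `<ul>${records.map(schema.hydrate).join("")}</ul>`;
}

const html = rehydrate(userSchema, payload);
console.log(html);
```

The same payload remains addressable as structured data (the third bullet above), while the hydrated output is ordinary hypermedia.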

                                                                        This is not really all that crazy. After all XML namespaces and Schemas are already only referenced in each document, not in-lined.

                                                                        The insistence on purity (HTML, XHTML, XML) is not winning. Falling back on dehydration/hydration might be your best bet if you insist.

                                                                        Me, I'm pragmatic. I don't mind the hydration codec being written in JS and SPAs. I mean, I agree that it would be better if we didn't need that -- after all I use NoScript still, every day. But in the absence of a suitable schema language I don't really see how to avoid JS and SPAs. Users want speed and responsiveness, and devs want structured data instead of hypermedia -- they want structure, which hypermedia doesn't really give them.

                                                                        But I'd be ecstatic if we had such a schema language and lost all that JS. Then we could still have JS-less pages that are effectively SPAs if the pages wanted to incorporate re-hydrated content sent in response to a button that did a GET, say.

                                                                        • recursivedoubts 2 days ago
                                                                          • cryptonector 2 days ago

                                                                            SPAs are for humans, but they let you have structured data.

                                                                            That's the problem here. People need APIs, which means not-for-humans, and so to find an efficient way to get "pages" for humans and APIs for not-humans they invented SPAs that transfer data in not-for-humans encodings and generate or render it from/to UIs for humans. And then the intransigent HATEOAS boosters come and tell you "that's not RESTful!!" "you're misusing the term!!", etc.

                                                                            Look at your response to my thoughtful comment: it's just a dismissive one-liner that helps no one and which implicitly says "thou shalt have an end-point that deals in HTML and another that deals in JSON, and thou shalt have to duplicate effort". It comes across as flippant -- as literally flipping the finger[0].

                                                                            No wonder the devs ignore all this HATEOAS and REST purity.

                                                                            [0] There's no etymological link between "flippant" and "flipping the finger", but the meanings are similar enough.

                                                                            • recursivedoubts 2 days ago

                                                                              Yeah, that was too short a response, sorry I was bouncing around a lot in the thread.

                                                                              The essay I linked to somewhat agrees w/your general point, which is that hypermedia is (mostly) wasted on automated consumers of REST (in the original sense) APIs.

                                                                              I don't think it's a bad thing to split your hypermedia API and your JSON API:

                                                                              https://htmx.org/essays/splitting-your-apis/

                                                                              (NB, some people recommend even splitting your JSON-for-app & JSON-for-integration APIs: https://max.engineer/server-informed-ui)

                                                                              I also don't think it's hard to avoid duplicating your effort, assuming you have a decent model layer:

                                                                              https://htmx.org/essays/mvc/

                                                                              As far as efficiency goes, HTML is typically within spitting distance of JSON particularly if you have compression enabled:

                                                                              https://github.com/1cg/html-json-size-comparison

                                                                              And it may also be more efficient to generate, because it isn't using reflection:

                                                                              https://github.com/1cg/html-json-speed-comparison

                                                                              (Those costs will typically be dwarfed by data store access anyway)

                                                                              So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.

                                                                              Hope that's more useful.

                                                                              • cryptonector 2 days ago

                                                                                Thanks, I appreciate the detailed response.

                                                                                > So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.

                                                                                I've yet to see what I proposed, so I've no idea how it would work out. Given the current state of the world I think devs will continue to write JS-dependent SPAs that use JSON APIs. Grandstanding about the meaning of REST is not going to change that.

                                                                                • recursivedoubts 2 days ago

                                                                                  I've built apps w/ hypermedia APIs & JSON APIs for automation, which is great because the JSON API can stay stable and not get dragged around by changes in your application.

                                                                                  As far as the future, we'll see. htmx (and other hypermedia-oriented libraries, like unpoly, hotwire, data-star, etc) is getting some traction, but I think you are probably correct that fixed-format JSON APIs talking to react front-ends is going to be the most common approach for the foreseeable future.

                                                                                  • cryptonector 2 days ago

                                                                                    If you want JS-lessness and HATEOAS-ness, then maybe if we had an automatic way to go from structured data to HTML... :)

                                                                                    • recursivedoubts 2 days ago

                                                                                      most structured-data to UI systems I have seen produce pretty bad, generic user interfaces

                                                                                      the innovation of hypermedia was mixing presentation information w/control information (hypermedia controls) to produce a user interface (distributed control information, in the case of the web)

                                                                                      i think that's an interesting and crucial aspect of the REST network architecture

                                                                                      • cryptonector 20 hours ago

                                                                                        What I have in mind is something like this:

                                                                                        1) you write your web page in HTML

                                                                                        2) where you fetch data from a server and would normally use JS to render it you'd instead have an HTML attribute naming the "schema" to use to hydrate the data into HTML which would happen automatically, with the hydrated HTML incorporated into the page at some named location.

                                                                                        The schema would be something like XSLT/XPath, but perhaps simpler, and it would support addressing JSON/CBOR data.
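A minimal interpretation of that idea (the attribute names and schema registry below are invented for illustration): an element declares a data source and a named schema, and the user-agent fetches the JSON and splices the hydrated HTML in at that location. In the actual proposal the browser would do this natively; here a few lines of script stand in for it, using a regex splice where a real implementation would use the DOM.

```javascript
// Sketch with invented attribute names. A page might declare:
//   <div id="users" data-src="/api/users" data-schema="user-list"></div>
// and the user-agent would fetch /api/users, apply the named schema,
// and incorporate the result at that location.

const schemas = {
  "user-list": (rows) =>
    "<ul>" +
    rows.map((u) => `<li><a href="/users/${u.id}">${u.name}</a></li>`).join("") +
    "</ul>",
};

// Stand-in for the native step: splice the hydrated HTML into the
// placeholder element (a real implementation would use the DOM, and
// the data would come from the URL in data-src rather than an argument).
function hydratePage(page, data) {
  return page.replace(/<div id="users"[^>]*><\/div>/, (m) =>
    m.replace("></div>", ">" + schemas["user-list"](data) + "</div>")
  );
}

const page =
  '<main><div id="users" data-src="/api/users" data-schema="user-list"></div></main>';
const out = hydratePage(page, [{ id: 7, name: "Tim" }]);
console.log(out);
```

The page itself stays plain HTML; the "schema" lives in a separately fetched, rarely changing document, which is the dehydration/rehydration idea from upthread.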

                                                                                        • recursivedoubts 19 hours ago

                                                                                          this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model

                                                                                          if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user

                                                                                          the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas

                                                                                          REST (original sense) does couple your responses to your UI, however, in that your responses are your UI, see https://htmx.org/essays/two-approaches-to-decoupling/

                                                                                          I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.

                                                                                          • cryptonector 17 hours ago

                                                                                            > if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user

                                                                                            This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?

                                                                                            > this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model

                                                                                            I wouldn't call it templating. It resembles more a stylesheet -- that's why I referenced XSLT/XPath. Browsers already know how to apply XSLT even -- is that unRESTful?

                                                                                            > the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas

                                                                                            Nonsense. The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server", it's not coupling anything. It's a compression technique of sorts, and mainly one that allows one to reuse API end-points in the UI.

                                                                                            EDIT: Sending the data and the instructions for how to present it separately is no more non-RESTful than using CSS and XML namespaces and Schema and XSLT are.

                                                                                            I think you're twisting REST into pretzels.

                                                                                            > REST (original sense) does couple your responses to your UI, however, in that your responses are your UI, see https://htmx.org/essays/two-approaches-to-decoupling/

                                                                                            How is one response RESTful and two responses not RESTful when the user-agent performs the two requests from a loaded page?

                                                                                            > I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.

                                                                                            You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.

                                                                                            • recursivedoubts 12 hours ago

                                                                                              > This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?

                                                                                              Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on whether a system is RESTful is whether an API end point requires an API-specific schema to interact with.

                                                                                              > Browsers already know how to apply XSLT even -- is that unRESTful?

                                                                                              XSLT has nothing to do with REST. Neither does CSS. REST is a network architecture style.

                                                                                              > The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server"...

                                                                                              I guess I'd need to see where the hypermedia controls are located: if they are in the "data" request or in the "html" request. CSS doesn't carry any hypermedia control information, both display and control (hypermedia control) data is in the HTML itself, which is what makes HTML a hypermedia. I'd also need to see the relationship between the two end points, that is, how information in one is consumed/referenced from the other. (Your mention of the term 'schema' is why I'm skeptical, but a concrete example would help me understand.)

                                                                                              If the hypermedia controls are in the data then I'd call that potentially a RESTful system in the original sense of that term, though i'd also need to see how clients consume it. (See https://htmx.org/essays/hypermedia-clients/)

                                                                                              > You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.

                                                                                              When discussing REST i think it's reasonable to link to the paper that defined the term. Along with Fielding, who defined the term, I regard the uniform interface as the most distinguishing technical characteristic of REST. In as much as a proposed system satisfies that (and the other REST constraints) I'm happy to call it RESTful.

                                                                                              In any event, I think some concrete examples (maybe a gist?) would help me understand what you are proposing.

                                                                                              • cryptonector 10 hours ago

                                                                                                > Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on if a system is RESTful is if an API end point requires an API-specific schema to interact with.

                                                                                                It's an API-specific schema, yes, but the browser doesn't have to know it because the API-to-HTML conversion is encoded in the second document (which rarely changes). I.e., notionally the browser only deals in the hydrated HTML and not in the API-specific schema. How does that make this not RESTful?

                                                                                                • recursivedoubts an hour ago

                                                                                                  Well, again I'm not 100% saying it isn't RESTful, I would need to see an example of the whole system to determine if the uniform interface (and the other constraints of REST) are satisfied. That's why i asked for an example showing where the hypermedia controls are located, etc. so we can make an objective analysis of the situation. REST is a network architecture and thus we need to look at the entire system to determine if it is being satisfied (see https://hypermedia.systems/components-of-a-hypermedia-system...)

                                                                                • eadmund 2 days ago

                                                                                  > I think you can profitably split your application API (hypermedia) from your automation API (JSON)

                                                                                  Why split them? Just support multiple representations: HTML and JSON (and perhaps other, saner representations than JSON …) and just let content negotiation sort it all out.

                                                                          • deniz-a 2 days ago

                                                                            The "structured data with programmatic access methods" sounds a lot like microformats2 (https://microformats.org/wiki/microformats2), which is being used quite successfully in the IndieWeb community to drive machine interactions with human websites.

                                                                            • cryptonector 20 hours ago

                                                                              Thanks for the link!

                                                                            • riwsky 2 days ago

                                                                              > The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API

                                                                              We already have the schema language; it’s HTML. Stylesheets and favicons are two examples of specific links that are automatically interpreted by user-agents. Applications are free to use their own link rels. If your point is that changing the name of those rels could break automation that used them, in a way that wouldn’t break humans…then the same is true of JSON APIs as well.
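A non-browser client can key off well-known (or application-defined) rel values with no out-of-band schema at all. A crude extractor (regex is fine for a sketch; real automation should use an HTML parser):

```javascript
// Extract href values for a given link rel from an HTML document.
function linksByRel(html, rel) {
  const out = [];
  const tagRe = /<(?:a|link)\b[^>]*>/gi;
  for (const tag of html.match(tagRe) || []) {
    const relMatch = tag.match(/\brel="([^"]*)"/i);
    const hrefMatch = tag.match(/\bhref="([^"]*)"/i);
    // rel is a space-separated token list, so check tokens, not the raw string.
    if (relMatch && hrefMatch && relMatch[1].split(/\s+/).includes(rel)) {
      out.push(hrefMatch[1]);
    }
  }
  return out;
}

const doc =
  '<link rel="stylesheet" href="/site.css">' +
  '<a rel="next" href="/orders?page=2">Next</a>';

console.log(linksByRel(doc, "next"));
console.log(linksByRel(doc, "stylesheet"));
```

An automated client that follows rel="next" keeps working however the page's URLs or layout change, which is exactly the hypermedia promise.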

                                                                              Like, the flaws you point out are legit—but they are properties of how devs are ab/using HTML, not the technology itself.

                                                                              • cryptonector a day ago

                                                                                HTML is most decidedly not a schema language.

                                                                              • alexpetros 2 days ago

                                                                                > The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication.

                                                                                What code duplication? If both these APIs use the same data fetching layer, there's no code duplication; if they don't, then it's because the JSON API and the Hypermedia UI have different requirements, and can be more efficiently implemented if they don't reuse each other's querying logic (usually the case).

                                                                                What you want is some universal way to write them both, and my general stance is that usually they have different requirements, and you'll end up writing so much on top of that universal layer that you might as well have just skipped it in the first place.

                                                                                • cryptonector 2 days ago

                                                                                  I've worked with a HATEOAS database application written in Ruby using Rails. It still managed to have HTML- and JSON-specific code. Its web UI was OK, but not as responsive as a SPA.

                                                                            • cryptonector 2 days ago

                                                                              > JSON rest api are just hypermedia rest api but with the data compressed. The compression format is json and the dictionary are the hypermedia relations previously passed in the client.

                                                                              Yes.

                                                                              > It’s 2025, the client don’t need to be generic and able to surf and discover the internet like it’s 2005.

                                                                              No. Where the client is a user-agent browser sort of application then it has to be generic.

                                                                              > The database is consumed via APIS distributed in two parts: first the client (a lib), second the data: json.

                                                                              Yes-ish. If instead of a hand-coded "re-hydrator" library you had a schema whose metaschema is supported by the user-agent, then everything would be better, because

                                                                              a) you'd have less code,

                                                                              b) need a lot less dev labor (because of (a), so I repeat myself),

                                                                              c) you get to have structured data APIs that also satisfy the HATEOAS concept.

                                                                              Idk if TFA will like or hate that, but hey.