• randallsquared 2 minutes ago

    I don't really understand why you'd bring up DELETE in a passage about idempotence and not PUT.

    • dwattttt 13 hours ago

      The reminder to "never break userspace" is good, but people never bring up the other half of that statement: "we can and will break kernel APIs without warning".

      It illustrates that the reminder isn't "never change an API in a way that breaks someone", it's the more nuanced "declare what's stable, and never break those".

      • brainzap a few seconds ago

        reminds me of a quote from Evan: we provided a migration path from 2 to 3, but so many internals changed that many plugins broke

        • delta_p_delta_x 13 hours ago

          Even if the kernel doesn't break userspace, GNU libc does, all the time, so the net effect is that Linux userspace is broken regardless of the kernel maintainers' efforts. Put simply, programs and libraries compiled on/for newer libc are ABI-incompatible or straight-up do not run on older libc, so everything needs to be upgraded in lockstep.

          It is a bit ironic and a little funny that Windows solved this problem a couple decades ago with redistributables.

          • rcxdude 10 hours ago

            GNU libc has pretty good backwards compatibility, though, so if you want to run on a broad range of versions, link against as old a version of libc as is practical (which does take some effort, annoyingly). It tends to be things like GUI libraries that are a bigger PITA, because they do break compatibility, the old versions stop being shipped in distros, and shipping them all with your app can still run into protocol compatibility issues.

            • Retr0id 12 hours ago

              otoh statically-linked executables are incredibly stable - it's nice to have that option.

              • delta_p_delta_x 12 hours ago

                From what I understand, statically linking in GNU's libc.a without releasing source code is a violation of LGPL. Which would break maybe 95% of companies out there running proprietary software on Linux.

                musl libc has a more permissive licence, but I hear it performs worse than GNU libc. One can hope for LLVM libc[1] so the entire toolchain would become Clang/LLVM, from the compiler driver to the C/C++ standard libraries. And then it'd be nice to whole-program-optimise from user code all the way to the libc implementation, rip through dead code, and collapse binary sizes.

                [1]: https://libc.llvm.org/

                • teraflop 11 hours ago

                  AFAIK, it's technically legal under the LGPL to statically link glibc as long as you also include a copy of the application's object code, along with instructions for how users can re-link against a different glibc if they wish. You don't need to include the source for those .o files.

                  But I don't think I've ever seen anybody actually do this.

                  • rcxdude 10 hours ago

                    Musl is probably the better choice for static linking anyway; GNU libc relies on dynamic linking for a few important features.

                    • resonious 10 hours ago

                      The Windows redistributables are so annoying as a user. I remember countless times applications used to ask me to visit the official Microsoft page for downloading them, and it was quite hard to find the right buttons to press to get the thing. Felt like offloading the burden to the users.

                      • IcyWindows 8 hours ago

                        Many installers do it right and don't require the user to do it themselves.

                      • dijit 6 hours ago

                        GNU LibC is notoriously difficult to statically link to anyway. (getaddrinfo for example).

                        Most people use musl, though some others use uclibc.

                        Musl is actually great, even if it comes with some performance drawbacks in a few cases.

                        • loeg 11 hours ago

                          You can (equivalently) distribute some specific libc.so with your application. I don't think anyone other than GNU maximalists believes this infects your application with the (L)GPL.

                          • Retr0id 5 hours ago

                            You'd need to distribute ld.so also, otherwise you'll run into ld/libc incompatibilities.

                      • o11c 8 hours ago

                        You're describing 2 completely different things there.

                        If your program is built to require myfavoritelibrary version 1.9, and you try to run it against myfavoritelibrary 1.0, no shit it doesn't work. Glibc is no different than any other in this regard.

                        If your program is built to require myfavoritelibrary version 1.0, and you try to run it on myfavoritelibrary 1.9 ... glibc's binary compatibility story has been very good since the release of 2.2 or so, way back in 2000. (I know from documentation that there were a lot of 2.0 -> 2.1 breakages, some of which might've actually been fixed in 2.1.x point releases, so I'm saying 2.2 to be safe)

                        It's not quite as perfect as Linux's "we do not break userland" but it's pretty darn close; I would have to hunt down changelogs to find something that actually broke without explicitly relying on "do not rely on this" APIs. Source compatibility is a different story, since deprecated APIs can be removed from the public headers but still present in the binary.

                        ... actually, even Linux has unapologetically broken its promise pretty badly in the past at various times. The 2.4 to 2.6 transition in particular was nasty. I'm also aware of at least one common syscall that broke in a very nasty way in some early versions; you can't just use ENOSYS to detect it but have to set up extra registers in a particular way to induce failure for incompatible versions (but only on some architectures; good luck with your testing!)

                        ---

                        There's nothing stopping you from installing and using the latest glibc and libgcc at runtime, though you'll have to work around your distro's happy path. Just be careful if you're building against them since you probably don't want to add extra dependencies for everything you build.

                        By contrast, I have statically-linked binaries from ~2006 that simply do not work anymore, because something in the filesystem has changed and their version of libc can't be fixed the way the dynamically-linked version has.

                        • throwaway2046 5 hours ago

                          You can find the history of API/ABI changes in glibc since 2011 in this table:

                          https://abi-laboratory.pro/?view=timeline&l=glibc

                          Granted, it hasn't been updated since 2023, but you can still see the trend in removed symbols in each version.

                          • ben-schaaf 4 hours ago

                            I looked into the changelog for 2.34, which this website claims removed 24 symbols.

                            * 9 malloc debugging variables were removed, though their symbols actually remain for backwards compatibility; they just don't do anything.

                            * vtimes was removed, but the symbol remains for backwards compatibility.

                            Those were the only changelog entries listing removals. None of them cause linking issues. The 9 that did break backwards compatibility are a set of debug tools that don't work for alternate memory allocators and the functionality can be brought back with libc_malloc_debug.so.

                            Maybe the changelog's incomplete, but this actually seems pretty tame to me.

                      • chubot 13 hours ago

                        Yeah, famously there is no stable public driver API for Linux, which I believe was the motivation for Google’s Fuchsia OS

                        So Linux is opinionated in both directions - towards user space and towards hardware - but in opposite ways

                      • pixl97 14 hours ago

                        While the author doesn't seem to like version based APIs very much, I always recommend baking them in from the very start of your application.

                        You cannot predict the future and chances are there will be some breaking change forced upon you by someone or something out of your control.

                        • paulhodge 12 hours ago

                          I have to agree with the author about not adding "v1" since it's rarely useful.

                          What actually happens as the API grows-

                          First, the team extends the existing endpoints as much as possible, adding new fields/options without breaking compatibility.

                          Then, once they need to have backwards-incompatible operations, it's more likely that they will also want to revisit the endpoint naming too, so they'll just create new endpoints with new names. (instead of naming anything "v2").

                          Then, if the entire API needs to be reworked, it's more likely that the team will just decide to deprecate the entire service/API, and then launch a new and better service with a different name to replace it.

                          So in the end, it's really rare that any endpoints ever have "/v2" in the name. I've been in the industry 25 years and only once have I seen a service that had a "/v2" to go with its "/v1".

                          • ks2048 11 hours ago

                            > So in the end, it's really rare that any endpoints ever have "/v2" in the name.

                            This is an interesting empirical question - take the 100 most used HTTP APIs and see what they do for backward-incompatible changes and see what versions are available. Maybe an LLM could figure this out.

                            I've been just using the Dropbox API and it is, sure enough, on "v2". (although they save you a character in the URL by prefixing "/2/").

                            Interesting to see some of the choices in v1->v2,

                            https://www.dropbox.com/developers/reference/migration-guide

                            They use a spec language they developed called stone (https://github.com/dropbox/stone).

                            • grodriguez100 7 hours ago

                              The author does not say that you “should not add v1”. They say that versioning is how you change your API responsibly (so, endorsing versioning), but that you should only do it as a last resort.

                              So you would add “v1”, to be able to easily bump to v2 later if needed, and do your best to avoid bumping to v2 if at all possible.

                            • andix 13 hours ago

                              I don't see any harm in adding versioning later. Let's say your api is /api/posts, then the next version is simply /api/v2/posts.

                              • choult 12 hours ago

                                It's a problem downstream. Integrators weren't forced to include a version number for v1, so the rework overhead to use v2 will be higher than if it was present in your scheme to begin with.

                                • pixl97 11 hours ago

                                  This here. It's way easier to grep a file for /v1/ to show all the API endpoints and ensure you haven't missed something.

                                  • cmcconomy 7 hours ago

                                    grep for /* and omit /v2 ?

                              • gitremote 11 hours ago

                                I don't think the author meant they don't include /v1 in the endpoint in the beginning. The point is that you should do everything to avoid having a /v2, because you would have to maintain two versions for every bug fix, which means making the same code change in two places or having extra conditional logic multiplied against any existing or new conditional logic. The code bases that support multiple versions look like spaghetti code, and it usually means that /v1 was not designed with future compatibility in mind.

                                • JimDabell 6 hours ago

                                  > While the author doesn't seem to like version based APIs very much, I always recommend baking them in from the very start of your application.

                                    You don’t really need to do that for REST APIs. If clients request application/vnd.foobar then you can always add application/vnd.foobar;version=2 later without planning this in advance.
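
                                    A minimal sketch of what that looks like server-side (the media type and field names are placeholders, not anything the article prescribes):

                                        def render_comment(accept_header: str, comment: dict):
                                            # Clients opt into version 2 via the Accept header, so no URL
                                            # ever has to change; everyone else keeps getting version 1.
                                            if "application/vnd.foobar" in accept_header and "version=2" in accept_header:
                                                return "application/vnd.foobar;version=2", {"body_text": comment["body"]}
                                            return "application/vnd.foobar", {"body": comment["body"]}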

                                  • lazyasciiart 3 hours ago

                                    Most REST APIs don’t support that. So you don’t need versioning for APIs that already have a request type specified.

                                    • JimDabell 2 hours ago

                                      I’m not sure what you mean in the context of a discussion about how to design APIs. If you are the one designing an API, it’s up to you what you support.

                                    • 946789987649 5 hours ago

                                      If you use something like an OpenAPI generator and want to have different DTOs in your version 2, then you cannot do what you suggested.

                                      • JimDabell 3 hours ago

                                        You can specify multiple media types in OpenAPI.

                                    • grodriguez100 7 hours ago

                                      I would say the author recommends the same actually: they say that versioning is “how you change your API responsibly” (so, endorsing versioning), but that you should only switch to a new version as a last resort.

                                      • pbreit 10 hours ago

                                        Disagree. Baking versioning in from the start means they will much more likely be used, which is a bad thing.

                                        • claw-el 14 hours ago

                                          If there is a breaking change forced upon in the future, can’t we use a different name for the function?

                                          • soulofmischief 13 hours ago

                                            A versioned API allows you to ensure a given version has one way to do things and not 5, 4 of which are no longer supported but can't be removed. You can drop old weight without messing up legacy systems.

                                            • Bjartr 13 hours ago

                                              See the many "Ex" variations of many functions in the Win32 API for examples of exactly that!

                                              • pixl97 11 hours ago

                                                Discoverability.

                                                /v1/downloadFile

                                                /v2/downloadFile

                                                Is much easier to check for a v3 than

                                                /api/downloadFile

                                                /api/downloadFileOver2gb

                                                /api/downloadSignedFile

                                                Etc. Etc.

                                                • claw-el 10 hours ago

                                                  Isn’t having the name (e.g. Over2gb) easier to understand than just saying v2? This is in the situation where there are breaking changes forced upon v1/downloadFile.

                                                  • echelon 10 hours ago

                                                    I have only twice seen a service ever make a /v2.

                                                    It's typically to declare bankruptcy on the entirety of /v1 and force eventual migration of everyone onto /v2 (if that's even possible).

                                                    • bigger_cheese 10 hours ago

                                                      A lot of the Unix/Linux syscall API has a version 2+

                                                      For example dup(), dup2(), dup3() and pipe(), pipe2() etc

                                                      LWN has an article: https://lwn.net/Articles/585415/

                                                      It talks about avoiding this by designing future APIs with a flags bitmask, to allow the API to be extended in the future.
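
                                                      A rough Python rendering of that trick (names invented; the point from the article is rejecting unknown bits, so new flags can be added later and old implementations fail loudly instead of silently ignoring them):

                                                          import enum

                                                          class PipeFlags(enum.IntFlag):
                                                              CLOEXEC = 1 << 0
                                                              NONBLOCK = 1 << 1

                                                          _KNOWN = PipeFlags.CLOEXEC | PipeFlags.NONBLOCK

                                                          def pipe2_like(flags: int = 0):
                                                              # Unknown flag bits -> error, the moral equivalent of EINVAL.
                                                              if flags & ~int(_KNOWN):
                                                                  raise ValueError("unknown flag bits")
                                                              # ... create the pipe, honoring the recognized flags ...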

                                                      • pixl97 10 hours ago

                                                        I work for a company that has an older api so it's defined in the header, but we're up to v6 at this point. Very useful for changes that have happened over the years.

                                                    • jahewson 12 hours ago

                                                      /api/postsFinalFinalV2Copy1-2025(1)ExtraFixed

                                                      • ks2048 11 hours ago

                                                        If you only break one or two functions, it seems ok. But, some change in a core data type could break everything, so adding a prefix "/v2/" would probably be cleaner.

                                                        • CharlesW 13 hours ago

                                                          You could, but it just radically increases complexity in comparison to a "version" knob in a URI, media type, or header.

                                                      • achernik 11 hours ago

                                                        > How should you store the key? I’ve seen people store it in some durable, resource-specific way (e.g. as a column on the comments table), but I don’t think that’s strictly necessary. The easiest way is to put them in Redis or some similar key/value store (with the idempotency key as the key).

                                                        I'm not sure how storing a key in Redis would achieve idempotency in all failure cases. What's the algorithm? Imagine a server handling the request is doing a conditional write (like SET key 1 NX), and sees that the key is already stored. What then, skip creating a comment? Can't assume that the comment had been created before, since the process could have been killed in between storing the key in Redis and actually creating the comment in the database.

                                                        An attempt to store idempotency key needs to be atomically committed (and rolled back in case it's unsuccessful) together with the operation payload, i.e. it always has to be a resource-specific id. For all intents and purposes, the idempotency key is the ID of the operation (request) being executed, be it "comment creation" or "comment update".
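
                                                        A sketch of what I mean, with SQLite standing in for the database (schema invented for illustration) - the key row and the comment row commit or roll back together, so neither can exist without the other:

                                                            import sqlite3

                                                            conn = sqlite3.connect(":memory:")
                                                            conn.executescript("""
                                                                CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT);
                                                                CREATE TABLE idempotency_keys (key TEXT PRIMARY KEY, comment_id INTEGER);
                                                            """)

                                                            def create_comment(key: str, body: str) -> int:
                                                                with conn:  # one transaction for both inserts
                                                                    row = conn.execute(
                                                                        "SELECT comment_id FROM idempotency_keys WHERE key = ?", (key,)
                                                                    ).fetchone()
                                                                    if row:  # retry of a request that already succeeded
                                                                        return row[0]
                                                                    cur = conn.execute("INSERT INTO comments (body) VALUES (?)", (body,))
                                                                    conn.execute(
                                                                        "INSERT INTO idempotency_keys VALUES (?, ?)", (key, cur.lastrowid)
                                                                    )
                                                                    return cur.lastrowid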

                                                        • rockwotj 10 hours ago

                                                          Yes, please don’t add another component to introduce idempotency; it will likely have weird abstraction-leaking behavior, or just be plain broken if you don’t understand delivery guarantees. Much better to support some kind of label or metadata with writes, so a user can track progress on their end and store it alongside their existing data.

                                                        • swagasaurus-rex 12 hours ago

                                                          Cursor-based pagination was mentioned. It has another useful property: if items have been added between when a user loads the page and hits the next button, index-based pagination will give you some already-viewed items from the previous page.

                                                          Cursor-based pagination (using the ID of the last object on the previous page) will give you a new list of items that haven't been viewed. This is helpful for infinite scrolling.

                                                          The downside to cursor-based pagination is that it's hard to build a jump-to-page-N button.
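
                                                          For illustration, a minimal keyset query (SQLite, invented schema) where the cursor is just the last id seen:

                                                              def next_page(conn, after_id=None, limit=20):
                                                                  # Filtering on the last-seen id (instead of using OFFSET) means
                                                                  # newly inserted rows can't push already-viewed items onto the
                                                                  # next page.
                                                                  if after_id is None:
                                                                      rows = conn.execute(
                                                                          "SELECT id, body FROM items ORDER BY id LIMIT ?",
                                                                          (limit,)).fetchall()
                                                                  else:
                                                                      rows = conn.execute(
                                                                          "SELECT id, body FROM items WHERE id > ? ORDER BY id LIMIT ?",
                                                                          (after_id, limit)).fetchall()
                                                                  # Cursor for the next request; None when this was the last page.
                                                                  next_cursor = rows[-1][0] if len(rows) == limit else None
                                                                  return rows, next_cursor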

                                                          • echelon 10 hours ago

                                                            You should make your cursors opaque so as to never reveal the size of your database.

                                                            You can do some other cool stuff if they're opaque - encode additional state within the cursor itself: search parameters, warm cache / routing topology, etc.
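
                                                            Something like this sketch (a real implementation would want key rotation, expiry, and so on):

                                                                import base64, hashlib, hmac, json

                                                                SECRET = b"server-side-secret"  # placeholder signing key

                                                                def encode_cursor(state: dict) -> str:
                                                                    # Pack arbitrary state (last id, search params, routing hints...)
                                                                    # into an opaque, tamper-evident token.
                                                                    payload = json.dumps(state, separators=(",", ":")).encode()
                                                                    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()[:16]
                                                                    return base64.urlsafe_b64encode(tag + payload).decode()

                                                                def decode_cursor(cursor: str) -> dict:
                                                                    raw = base64.urlsafe_b64decode(cursor.encode())
                                                                    tag, payload = raw[:16], raw[16:]
                                                                    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()[:16]
                                                                    if not hmac.compare_digest(tag, expected):
                                                                        raise ValueError("invalid cursor")
                                                                    return json.loads(payload)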

                                                        • jillesvangurp 6 hours ago

                                                          API versioning mostly just means things perpetually stuck at v1. You might have the intention to change things up, but you never will.

                                                          Putting version numbers in a URL is a bit of a kludge. v1 is by far the most common version you will ever see in a URL. v2 is rare; v3 is, strangely, more common. I don't think I've seen a v4 or v5 or higher in the wild very often. That's just not a thing.

                                                          My theory is that v1 is the quick and dirty version that developers would like to forget exists. v2 is the "now we know what we're doing!" version and that's usually quickly followed by v3 because if you can change your mind once you can do it twice. After which people just tell developers to quit messing with the API already and keep things stable. v4 and v5 never happen.

                                                          Another observation is that semantic versioning for API URLs seems rare. The reason: it's inconvenient for clients to have to update all their URLs every time some developer changes their mind. Most clients will hard-code the version, because it never changes. And because it is hard-coded, changing the version becomes inconvenient.

                                                          My attitude towards URL based versioning is that you could do it but it's not a tool that you get to use much. Therefore you can safely skip it and it won't be a problem. And in the worst case where you do need it, you can easily add a v2 URL space anyway. But you probably never will, as you are unlikely to deprecate the entirety of your API.

                                                          There are other ways to deal with deprecating APIs. You can just add new paths or path prefixes in your API as needed. You can use a different domain. Or you can just remove them after some grace period. It depends. Versioning is more aspirational than actually a thing with APIs.

                                                          We do version our API but via client headers. Our API client sends a version header. And we check it server side and reject older versions with a version conflict response (409). This enables us to force users of our app to update to something we still support. The version number of our client library increments regularly. Anything falling behind too far we reject. This doesn't work for all use cases. But for a web app this is completely fine.
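
                                                          Roughly like this (simplified; the header name and threshold are just stand-ins for what our setup does):

                                                              MIN_SUPPORTED = 42  # oldest client version still accepted

                                                              def check_client_version(headers: dict):
                                                                  # Runs before every handler; clients that have fallen too far
                                                                  # behind get a 409 telling them to update.
                                                                  try:
                                                                      version = int(headers.get("X-Client-Version", "0"))
                                                                  except ValueError:
                                                                      version = 0
                                                                  if version < MIN_SUPPORTED:
                                                                      return 409, {"error": "client version no longer supported"}
                                                                  return None  # fall through to the normal handler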

                                                        • weinzierl 3 hours ago

                                                          "Think about it - if you send three DELETE comments/32 requests in a row, it won’t delete three comments. The first successful request will delete the comment with ID 32, and the remaining requests will 404 when they can’t find the already-deleted comment."

                                                          Not necessarily. Many implementations return HTTP 204 for any DELETE that succeeds in the sense that the element is gone, regardless of whether it was there before. To me this always made much more sense than 404.

                                                          • runroader 14 hours ago

                                                            I think the only thing here that I don't agree with is that internal users are just users. Yes, they may be more technical - or likely other programmers, but they're busy too. Often they're building their own thing and don't have the time or ability to deal with your API churning.

                                                            If at all possible, take your time and dog-food your API before opening it up to others. Once it's opened, you're stuck and need to respect the "never break userspace" contract.

                                                            • Supermancho 10 hours ago

                                                              With internal users, you likely have instrumentation that allows you to contact and have those users migrate. You can actually sunset api versions, making API versioning an attractive solution. I've both participated in API versioning and observed it employed in organizations that don't use it by default as a matter of utility.

                                                              • devmor 14 hours ago

                                                                I think versioning still helps solve this problem.

                                                                There’s a lot of things you can do with internal users to prevent causing a burden though - often the most helpful one is just collaborating on the spec and making the working copy available to stakeholders. Even if it’s a living document, letting them have a frame of reference can be very helpful (as long as your office politics prevent them from causing issues for you over parts in progress they do not like.)

                                                              • barapa 10 hours ago

                                                                They suggest storing the idempotency key in redis. Seems like if possible, you should store them in whatever system you are writing to in a single transaction with the write mutations.

                                                                • 0xbadcafebee 12 hours ago

                                                                  Most people who see "API" today only think "it's a web app I send a request to, and I pass some arguments and set some headers, then check some settings from the returned headers, then parse some returned data."

                                                                  But "API" means "Application Programming Interface". It was originally for application programs, which were... programs with user interfaces! It comes from the 1940's originally, and wasn't referred to for much else until 1990. APIs have existed for over 80 years. Books and papers have been published on the subject that are older than many of the people reading this text right now.

                                                                  What might those older APIs have been like? What were they working with? What was their purpose? How did those programmers solve their problems? How might that be relevant to you?

                                                                  • spacechild1 40 minutes ago

                                                                    You are talking in the past tense, but there are still many non-web APIs. Every software library has an API. I still find it incredibly annoying that the web folks have hijacked the term "API" as shorthand for "web API".

                                                                  • deathanatos 5 hours ago

                                                                    Pagination: do not force me to drink from a paginated coffee stirrer. I do not want 640 B of data in a response, and then have to send another request for the next 640 B. And often, pagination means the calls are serialized, so I'm doing nothing but waiting out round-trip latency after round-trip latency for the next meager 640 B of data.

                                                                    Azure I'm looking at you. Many of their services do this, but Blob storage is something else: I've literally gotten information-free responses there. (I.e., 0 B of actual data. I wish I could say 0 B were used to transfer it.)

                                                                    When you're designing, think about how big a record/object/item is, and return a reasonable number of them in a page. For programmatic consumers who want to walk the dataset, a 640 KiB response is really not that big, and I've seen so many times responses orders of magnitude less, because someone thought "100 items is a good page size, right?" and 100 items was like 4 KiB of data.

                                                                    > If you have thirty API endpoints, every new version you add introduces thirty new endpoints to maintain. You will rapidly end up with hundreds of APIs that all need testing, debugging, and customer support.

                                                                    You version the one thing that's changing.

                                                                    As much as I hate the /v2/... form of versioning, nobody reversions all the /v1/... APIs just because one API needed a /v2. /v2 is a ghost town, save for the /v2 APIs.

                                                                    • atoav 5 hours ago

                                                                      Yeah, pagination is a great option - maybe even a good default. But don't make it the only choice; give developers the choice to make the tradeoff between number of requests and payload size.

                                                                    • frabonacci 14 hours ago

                                                                      The reminder to "never break userspace" is gold and often overlooked.. ahem Spotify, Reddit and Twitter come to mind.

                                                                      • minzak 4 hours ago

                                                                        Nice article. I'd also highly recommend the "How To Design A Good API and Why it Matters" video from Joshua Bloch (author of Effective Java). He digs much deeper, mostly at the code level rather than the JSON level, but the same principles apply. Even though this video is almost 20 years old, the topics are as significant today as they were back then. https://www.youtube.com/watch?v=aAb7hSCtvGw

                                                                        • BrainyZeiny 30 minutes ago

                                                                          In 2025, leaving GraphQL out of a discussion on modern API design is like writing about web frameworks and downplaying React. It may not be right for every use case, but it has become foundational in many serious frontend/backend architectures, and that deserves acknowledgment.

                                                                          GraphQL isn’t just another protocol; it’s a paradigm shift in how we think about designing and consuming APIs. The author downplays its role, but in practice GraphQL enables cleaner contracts between frontend and backend, encourages typed schemas, and dramatically reduces over-fetching and under-fetching. That’s not a minor point - that’s central to what most developers care about when consuming APIs.

                                                                          Regarding caching: yes, REST has traditional browser and CDN-based caching, but GraphQL is absolutely cacheable too. Tools like Apollo Client and Relay have built-in normalized caches that are far more granular and powerful than most REST setups. At the infrastructure level, persisted queries and CDN layer solutions (like GraphCDN or Stellate) further optimize caching behavior. So the claim that “you can’t cache GraphQL” is outdated at best.

                                                                          • nevertoolate 13 minutes ago

                                                                            Caching is a straw man. Complexity tends to be higher to implement a GraphQL backend than for e.g. a type-spec => openapi backend. For the simple case, the simpler solution wins imo. For the complex case it can be a toss-up, but it seems that performance and security can be big deal-breakers for GraphQL, so maybe only big enterprises can afford to go down that route.

                                                                            I'm not sure what you mean by the react analogy, react seems to be far more popular than graphql in their respective areas.

                                                                          • JimDabell 7 hours ago

                                                                            This is great. One thing I would add:

                                                                            The quality of an API is inversely correlated with how difficult it is to obtain the API documentation. If you are only going to get the API documentation after signing a contract, just assume it’s dismally bad.

                                                                            • claw-el 14 hours ago

                                                                              > However, a technically-poor product can make it nearly impossible to build an elegant API. That’s because API design usually tracks the “basic resources” of a product (for instance, Jira’s resources would be issues, projects, users and so on). When those resources are set up awkwardly, that makes the API awkward as well.

                                                                              One issue I have is with weird resources that feel like unnecessary abstractions. They make it hard for a human to read and understand intuitively, especially someone new to that set of APIs. They also make it so much harder to troubleshoot during an incident.

                                                                              • mrkeen 6 hours ago

                                                                                > That way you can send as many retries as you like, as long as they’ve all got the same idempotency key - the operation will only be performed once.

                                                                                I worked in an org where idempotency meant: if it threw an exception this time, it needs to throw the same exception every time.

                                                                                • cjblomqvist 6 hours ago

                                                                                  Pragmatism rules here, but yeah - the common way to do this (at least if you have keys generatable by the client), e.g. using REST, is to not allow POSTs, but only PUT. Most APIs I've seen use PUT solely for updates (of existing items), but as is obvious from the wording, that's not the original intention.

                                                                                • canpan 10 hours ago

                                                                                  > many of your users will not be professional engineers. They may be salespeople, product managers, students, hobbyists, and so on.

                                                                                  This is not just true for authentication. If you work in a business setting, your APIs will be used by the most random set of users. They may be able to google how to call your API in Python, but not be able to do things like convert UTC to their local time zone.

                                                                                  • xtacy 14 hours ago

                                                                                    Are there good public examples of well designed APIs that have stood the test of time?

                                                                                    • binaryturtle 14 hours ago

                                                                                      I always thought the Amiga APIs with the tag lists were cool. You easily could extend the API/ABI w/o breaking anything at the binary level (assuming you made the calls accept tag lists as parameters to begin with, of course).

                                                                                    • miki123211 4 hours ago

                                                                                      > You should let people use your APIs with a long-lived API key

                                                                                      This is an extremely unpopular opinion, but I would go even further: I think you should let people use your API with just a username and a password.

                                                                                      It should by no means be the only way people can use your API. Put very low users-per-IP rate limits on that approach if you want to, to force lazy but professional software developers to go the OAuth route before their app gets to production. For one-off scripts though, APIs that let you do this are a breath of fresh air.

                                                                                      If your API is based on API keys, you will be tempted to do things that really annoy new users of that API. People don't want to tell you what their app name is, they don't know that yet. They're certainly not picking a purpose they need this API for from a list of five, not if it doesn't include "completing a classroom assignment I don't really care about and want to finish as quickly as possible." They for sure don't yet know what scopes they might possibly need, even if to you, their names are descriptive and obvious. If you allow user-password authentication, you take away the ability to shoot yourself in the foot in this way.

                                                                                      • tiffanyh 8 hours ago

                                                                                        Here’s also some good recommendations: https://jcs.org/2023/07/12/api

                                                                                      • zahlman 13 hours ago

                                                                                        Anyone else old enough to remember when "API" also meant something that had nothing to do with sending and receiving JSON over HTTP? In some cases, you could even make something that your users would install locally, and use without needing an Internet connection.

                                                                                        • drdaeman 13 hours ago

                                                                                          I believe it’s pretty common to e.g. call libraries’ and frameworks’ user- (developer-) facing interface an API, as in “Python’s logging library has a weird-looking API”, so I don’t think “API” has eroded to mean only networked ones.

                                                                                          • mettamage 12 hours ago

                                                                                            I never understood why libraries also had the word API. From my understanding a library is a set of functions specific to a certain domain, such as a statistics library, for example. Then why would you need the word API? You already know it’s a library.

                                                                                            For endpoints it’s a bit different: you don’t know whether they are user-facing or programmer-facing.

                                                                                            I wonder if someone has a good take on this. I’m curious to learn.

                                                                                            • dfee 12 hours ago

                                                                                              To use code, you need an interface. One for programming. Specifically to build an application.

                                                                                              Why does the type of I/O boundary matter?

                                                                                              • mettamage 4 hours ago

                                                                                                Wouldn't the interface in C simply be called function headers? Why are we using the term API? It seems a word like "function signatures" would also make it clear (or just signatures or headers).

                                                                                                Maybe I just don't understand what the word interface means other than the GUI version of it. What's an interface in the analogue world? [2]

                                                                                                By the way, one person downvoted me. To that person: it's fine that you downvoted me, but also let's try to keep an open and inclusive culture?

                                                                                                I know it's a beginner question. I'm not a beginner, I use APIs all the time and have designed them as well. Just how I used servers without knowing for 5 years the semantic meaning behind it [1]. Understanding things deeply in that way is not my forte.

                                                                                                [1] Though most people still don't know that clients/servers are roles of computer programs. Many programmers conflate servers with actual hardware in the sense of "a computer can be a client of a server". Well, no a piece of code can be a client to another piece of code and a server can be a piece of code to another piece of code. They're roles, not distinct hardware.

                                                                                                [2] Claude mentions:

                                                                                                Analog World Interfaces

                                                                                                Door handle - The interface to a door mechanism. Whether it's a simple latch or complex electronic lock, you just turn/push the handle. An interface is basically the part you touch/use without needing to understand what's behind it.

                                                                                                • zahlman 41 minutes ago

                                                                                                  > Wouldn't the interface in C simply be called function headers? Why are we using the term API? It seems a word like "function signatures" would also make it clear (or just signatures or headers).

                                                                                                  First, to contrast the "application programmer interface" (i.e. what code has to be written to use it properly in an environment with a compiler) from the "application binary interface" (i.e. what actual bytes have to be provided in order to work with an already compiled version — important for linking, inter-process communication etc.).

                                                                                                  Second, to be able to talk about what you actually do with the headers (i.e. what the rules are for using the code) separately from the headers themselves (i.e. just the code itself), and to abstract over other programming languages that don't work the same way.

                                                                                                  > They're roles, not distinct hardware.

                                                                                                  So, you already understand the value of these kinds of distinctions. A shame we haven't historically made them more consistently. (As an exercise, try to list everything that "static" can mean in every programming language you know where that's a keyword. For bonus points, contrast and compare to how the word is used in the discussion of programming language design.)

                                                                                                  > Maybe I just don't understand what the word interface means other than the GUI version of it. What's an interface in the analogue world? ... Claude mentions:

                                                                                                  In the world that I was recalling, when people were unfamiliar with a word, they used a resource called a "dictionary" to look them up. This provided a pre-written answer directly, rather than relying on sophisticated computer models to come up with something new every time. Admittedly, this did trend towards online use over time, in particular since this made it easier to update these resources to reflect current use patterns. But those online resources are still available, e.g. https://www.merriam-webster.com/dictionary/interface .

                                                                                                  Even with AI expanding so far as to creep into search engines, you can still reliably obtain such definitions with search queries consisting of "define" + the word.

                                                                                              • snmx999 8 hours ago

                                                                                                The API is to a library what a recipe is to food.

                                                                                                • shortrounddev2 12 hours ago

                                                                                                  To me the API is the function prototypes. The DLL is the library

                                                                                              • chubot 13 hours ago

                                                                                                Well it stands for “application programming interface”, so I think it is valid to apply it to in-process interfaces as well as between-process interfaces

                                                                                                Some applications live in a single process, while others span processes and machines. There are clear differences, but also enough in common to speak of “APIs” for both

                                                                                                • zahlman 35 minutes ago

                                                                                                  It certainly is valid. I'm just irritated that it's taken over to the extent that it has.

                                                                                                  Just as I am irritated that people seem to have forgotten that "applications" can potentially run completely locally, and that programs can be designed around the assumption that the "front end" (UI) and "back end" (business logic) will communicate directly (or at least exist within the same process, even if there are good reasons to set up message queues etc.).

                                                                                                  But, you know, that's bad for business. Because that entails that the consumer might actually get to own something, even if it's just an ephemeral pattern of bits on local storage.

                                                                                                • gct 12 hours ago

                                                                                                  Everyone's decided that writing regular software to run locally on a computer is the weird case and so it has to be called "local first".

                                                                                                  • rogerthis 13 hours ago

                                                                                                    Things would come in SDKs, and docs were in MS Help .chm files.

                                                                                                    • ivanjermakov 11 hours ago

                                                                                                      > sending and receiving JSON over HTTP

                                                                                                      In my circles this is usually (perhaps incorrectly) called REST API.

                                                                                                      • j45 13 hours ago

                                                                                                      APIs are for providing accessibility - access to the interactions and data inside an application from the outside.

                                                                                                        The format and protocol of communication was never fixed.

                                                                                                      In addition to the REST APIs of today, SOAP, WSDL, and web sockets can all deliver some form of API.

                                                                                                        • bigiain 10 hours ago

                                                                                                          CORBA

                                                                                                          Shudder...

                                                                                                      • wener 12 hours ago

                                                                                                        I still think /v1 to /v2 is a break. I don't trust that you'll keep v1 forever; otherwise you'd never have introduced this excuse.

                                                                                                        I'd rather introduce more fields or flags to control the behavior as params than ask users to change the whole base URL for a single new API.

                                                                                                        • calrain 12 hours ago

                                                                                                          I like this pattern.

                                                                                                          When an API commits to /v1 it doesn't mean it will deprecate /v1 when /v2 or /v3 come out, it just means we're committing to supporting older URI strategies and responses.

                                                                                                          /v2 and /v3 give you that flexibility to improve without affecting existing customers.

                                                                                                        • mlhpdx 12 hours ago

                                                                                                          Having built a bunch of low level network APIs I think the author hits on some good, common themes.

                                                                                                          Versioning, etc. matter (or don’t) for binary UDP APIs (aka protocols) just as much as for any web API.

                                                                                                          • cyberax 14 hours ago

                                                                                                              I'm of a somewhat different opinion on API versioning, but I can see the argument. I definitely disagree about idempotency: it's NOT optional. You don't have to require idempotency tokens for each request, but there should be an option to specify them. Stripe's API clients are a good example here; they automatically generate idempotency tokens for you.

                                                                                                              Things that are missing from this list but that were important for me at some point:

                                                                                                              1. Deadlines. Your API should allow callers to specify a deadline after which the request no longer matters. The API implementation can use this deadline to cancel any pending operations.

                                                                                                            2. Closely related: backpressure and dependent services. Your API should be designed to not overload its own dependent services with useless retries. Some retries might be useful, but in general the API should quickly propagate the error status back to the callers.

                                                                                                            3. Static stability. The system behind the API should be designed to fail static, so that it retains some functionality even if the mutating operations fail.
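
                                                                                                              For (1), a sketch of deadline propagation (the downstream call is hypothetical):

                                                                                                                  import time

                                                                                                                  def handle(request, deadline: float):
                                                                                                                      # The caller passes an absolute deadline; each downstream call
                                                                                                                      # gets only the remaining budget, and we bail out early once
                                                                                                                      # the result can no longer matter.
                                                                                                                      remaining = deadline - time.monotonic()
                                                                                                                      if remaining <= 0:
                                                                                                                          raise TimeoutError("deadline exceeded before dispatch")
                                                                                                                      # e.g. downstream_client.get(request.url, timeout=remaining)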

                                                                                                            • deterministic 6 hours ago

                                                                                                              It’s rare to read an article where I agree 100% with everything written.

                                                                                                              Bravo!

                                                                                                              • cyberax 15 hours ago

                                                                                                                > You should let people use your APIs with a long-lived API key.

                                                                                                                Sigh... I wish this were not true. It's a shame that no alternatives have emerged so far.

                                                                                                                • TrueDuality 14 hours ago

                                                                                                                  There are other options that allow long-lived access with naturally rotating keys without OAuth and only a tiny amount of complexity increase that can be managed by a bash script. The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key.

                                                                                                                  • maxwellg 13 hours ago

                                                                                                                    Refresh tokens are only really required if a client is accessing an API on behalf of a user. The refresh token tracks the specific user grant, and there needs to be one refresh token per user of the client.

                                                                                                                    If a client is accessing an API on behalf of itself (which is a more natural fit for an API Key replacement) then we can use client_credentials with either client secret authentication or JWT bearer authentication instead.

                                                                                                                    • TrueDuality 11 hours ago

                                                                                                                        That is a very specific form of refresh token, but not the only model. You can just as easily have your "API key" be that refresh token. You submit it to an authentication endpoint, get back a new refresh token and a bearer token, and invalidate the previous bearer token if it was still valid. The bearer token will naturally expire; if you're still using the API, just use the refresh token immediately, and if it's days or weeks later, you can use it then.

                                                                                                                      There doesn't need to be any OIDC or third party involved to get all the benefits of them. The keys can't be used by multiple simultaneous clients, they naturally expire and rotate over time, and you can easily audit their use (primarily due to the last two principles).

                                                                                                                    • rahkiin 14 hours ago

                                                                                                                        If API keys do not need to be stateless, every API key can become a refresh token with a full permission and validity lookup.

                                                                                                                      • marcosdumay 10 hours ago

                                                                                                                        This.

                                                                                                                          The separation of a refresh cycle is an optimization done for scale. You don't need it if you don't need the scale. (And you need truly huge scale before you hit that need.)

                                                                                                                      • 0x1ceb00da 10 hours ago

                                                                                                                        > The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key

                                                                                                                        I never understood why.

                                                                                                                        • TrueDuality 9 hours ago

                                                                                                                          The quick rundown of the refresh-token flow I'm referring to is as follows (a rough code sketch comes after the steps):

                                                                                                                          1. Generate your initial refresh token for the user just like you would a random API key. You really don't need to use a JWT, but you could.

                                                                                                                          2. The client sends the refresh token to an authentication endpoint. This endpoint validates the token, then expires the refresh token and any prior bearer tokens issued against it. The client gets back a new refresh token and a bearer token with an expiration window (let's call it five minutes).

                                                                                                                          3. The client uses the bearer token for all requests to your API until it expires

                                                                                                                          4. If the client wants to continue using the API, go back to step 2.
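
                                                                                                                          A rough server-side sketch of steps 1 and 2 in Python; the in-memory dicts stand in for a real token store, and every name here is illustrative:

                                                                                                                              import secrets
                                                                                                                              import time

                                                                                                                              BEARER_TTL = 5 * 60            # the five-minute window from step 2
                                                                                                                              REFRESH_TTL = 90 * 24 * 3600   # optional expiry, per the backup example below

                                                                                                                              refresh_tokens = {}  # token -> {"user", "expires", "bearer"}
                                                                                                                              bearer_tokens = {}   # token -> {"user", "expires"}

                                                                                                                              def issue_refresh_token(user):
                                                                                                                                  token = secrets.token_urlsafe(32)  # step 1: random is fine, no JWT needed
                                                                                                                                  refresh_tokens[token] = {"user": user,
                                                                                                                                                           "expires": time.time() + REFRESH_TTL,
                                                                                                                                                           "bearer": None}
                                                                                                                                  return token

                                                                                                                              def exchange(old_refresh):
                                                                                                                                  entry = refresh_tokens.pop(old_refresh, None)  # one-time use
                                                                                                                                  if entry is None or entry["expires"] < time.time():
                                                                                                                                      raise PermissionError("unknown or expired refresh token")
                                                                                                                                  bearer_tokens.pop(entry["bearer"], None)  # expire any prior bearer token
                                                                                                                                  new_refresh = issue_refresh_token(entry["user"])
                                                                                                                                  bearer = secrets.token_urlsafe(32)
                                                                                                                                  refresh_tokens[new_refresh]["bearer"] = bearer
                                                                                                                                  bearer_tokens[bearer] = {"user": entry["user"],
                                                                                                                                                           "expires": time.time() + BEARER_TTL}
                                                                                                                                  return new_refresh, bearer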

                                                                                                                          The benefits of that minimal version:

                                                                                                                          Client restriction and user behavior steering. With bearer tokens expiring quickly and refresh tokens being one-time use, it is infeasible to share a single credential between multiple clients. With easy provisioning, this steers users toward generating one credential per client.

                                                                                                                          Breach containment and blast radius reduction. If your bearer tokens leak (logs are a surprisingly common source), they automatically expire even when left in backups or deep in the objects of your git repo. If a bearer token is compromised, it's only valid for your expiration window. If a refresh token is compromised and used, the legitimate client gets knocked offline, increasing the likelihood of detection. This property also lets you know whether a leaked refresh token was used at all before it was revoked.

                                                                                                                          Audit and monitoring opportunities. Every refresh creates a logging checkpoint where you can track usage patterns, detect anomalies, and enforce policy changes. This gives you natural rate limiting and abuse detection points.

                                                                                                                          Most security frameworks (SOC 2, ISO 27001, etc.) prefer time-limited credentials as a basic security control.

                                                                                                                          Add an expiration time to refresh tokens to naturally clean up access from broken or no-longer-used clients. Example: a daily backup script whose refresh token has a 90-day expiration window. The backups would have to fail to run for 90 straight days before the token became an issue, and if the access is still needed, the fix is cheap: provision a new key. After 90 days of failures you either already needed to perform maintenance on your backup system, or you had moved to something else without revoking its access keys.

                                                                                                                          • 0x1ceb00da 8 hours ago

                                                                                                                            So a refresh token on its own isn't more secure than a simple API key. You need a lot of plumbing and abuse-detection analytics around it as well.

                                                                                                                            • TrueDuality an hour ago

                                                                                                                              Almost every one of those benefits _doesn't_ require anything else. You need one more API endpoint to exchange refresh tokens for bearer tokens (over a simple static API key), and you get those benefits.

                                                                                                                      • pixelatedindex 14 hours ago

                                                                                                                        To add on, are they talking about access tokens or refresh tokens? It can't be just one token, because when it expires you'd have to update it manually from a portal or go through the same auth process again, neither of which is good.

                                                                                                                        And what time frame is “long-lived”? IME access tokens almost always have a lifetime of one week and refresh tokens anywhere from 6 months to a year.

                                                                                                                        • smj-edison 13 hours ago

                                                                                                                          > Every integration with your API begins life as a simple script, and using an API key is the easiest way to get a simple script working. You want to make it as easy as possible for engineers to get started.

                                                                                                                          > ...You’re building it for a very wide cross-section of people, many of whom are not comfortable writing or reading code. If your API requires users to do anything difficult - like performing an OAuth handshake - many of those users will struggle.

                                                                                                                          Sounds like they're talking about onboarding specifically. I actually really like this idea, because I've certainly had my fair share of difficulty just trying to get the dang thing to work.

                                                                                                                          Security-wise it's perhaps not the best, but mitigations like restricting keys to staging or rate limiting seem sufficient to me.

                                                                                                                          • pixelatedindex 12 hours ago

                                                                                                                            True, I have enjoyed using integrations where you can generate a token from the portal for your app to make requests with. One thing that's difficult in this scenario is authorization: which resources a given token has access to can be kind of murky.

                                                                                                                          • rahkiin 14 hours ago

                                                                                                                            I think they are talking about refresh tokens or API keys like PATs: some value you pass in a header and it just works. No token flow. And the key is valid for months and can be revoked.

                                                                                                                            • cyberax 13 hours ago

                                                                                                                              If you're using APIs from third parties, the most typical authentication method is a static key that you stick in the "Authorization" HTTP header.

                                                                                                                              OAuth flows are not at all common for server-to-server communications.

                                                                                                                              In my perfect world, I would replace API keys with certificates and use mutual TLS for authentication.
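
                                                                                                              To make the contrast concrete, here are both client-side patterns sketched in Python; URLs and file names are placeholders (requests accepts a client certificate via its cert parameter):

                                                                                                                  import requests

                                                                                                                  # Today's norm: a static key in the Authorization header.
                                                                                                                  requests.get(
                                                                                                                      "https://api.example.com/v1/widgets",
                                                                                                                      headers={"Authorization": "Bearer MY_STATIC_API_KEY"},
                                                                                                                  )

                                                                                                                  # The mutual-TLS alternative: the client proves its identity with a
                                                                                                                  # certificate during the handshake, so no long-lived shared secret
                                                                                                                  # crosses the wire.
                                                                                                                  requests.get(
                                                                                                                      "https://api.example.com/v1/widgets",
                                                                                                                      cert=("client.crt", "client.key"),  # client certificate + private key
                                                                                                                  )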

                                                                                                                              • zzo38computer 5 hours ago

                                                                                                                                I would also use mutual TLS; in addition to improved security (on both sides), it also allows for many additional possibilities, such as partial delegation of authorization, etc. (If not everyone wants it, it can be made an option rather than mandatory.)

                                                                                                                                • pixelatedindex 12 hours ago

                                                                                                                                  IME, OAuth flows are pretty common in S2S communication. Usually these tend to be client-credentials flows where you request a token exactly like you said (static key in Authorization), rather than authorization-code grant flows, which require a login action.

                                                                                                                                  • cyberax 11 hours ago

                                                                                                                                    Yeah, but then there's not that much difference, is there? You can technically move the generation of the access tokens to a separate secure environment, but this drastically increases the complexity and introduces a lot of interesting failure scenarios.

                                                                                                                                    • pixelatedindex 10 hours ago

                                                                                                                                      I mean… is adding an OAuth layer in 2025 really adding that much complexity? If you're scripting, there's usually some package native to the language; if you're using Postman, you'll need to generate your authn URL (or use username/password for client ID/secret).

                                                                                                                                      If you have sensitive resources they'll be gated behind some authz anyway. An exception I've seen is access to a sandbox env; those keys are easily generated at the press of a button.

                                                                                                                                      • cyberax 10 hours ago

                                                                                                                                        No, I'm just saying that an OAuth layer isn't really adding much benefit when you either use an API key to obtain the refresh token or the refresh token itself becomes a long-term secret, not much better than an API key.

                                                                                                                                        Some way to break out of the "shared secret" model is needed. Mutual TLS is one way that is at least getting some traction.

                                                                                                                                  • nostrebored 13 hours ago

                                                                                                                                    In your perfect world, are you primarily the producer or consumer of the API?

                                                                                                                                    I hate mTLS APIs because they often mean I need to change how my services are bundled and deployed. But to your point, if everything were mTLS I wouldn’t care.

                                                                                                                                    • cyberax 10 hours ago

                                                                                                                                      > In your perfect world, are you primarily the producer or consumer of the API?

                                                                                                                                      Both, really. mTLS deployment is the sticking point, but it's slowly getting better: AWS load balancers now support it, terminating the TLS connection, validating the client certificate, and passing it to the backend in an HTTP header. Google Cloud Platform and Cloudflare also support it.
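
                                                                                                                                      For instance, with an AWS ALB in mTLS verify mode, a backend can recover the validated certificate from a request header; this sketch assumes AWS's documented X-Amzn-Mtls-Clientcert header, which carries the URL-encoded PEM:

                                                                                                                                          from urllib.parse import unquote

                                                                                                                                          def client_cert_from_request(headers):
                                                                                                                                              # The load balancer terminates TLS, verifies the client
                                                                                                                                              # certificate, and forwards it URL-encoded in this header.
                                                                                                                                              pem = headers.get("X-Amzn-Mtls-Clientcert")
                                                                                                                                              if pem is None:
                                                                                                                                                  raise PermissionError("no client certificate presented")
                                                                                                                                              return unquote(pem)  # PEM text; parse or pin it per your policy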