• Arnavion 3 hours ago

    Distributions optimize for software in their repos. Software in their repos is compiled against libraries in their repos, so dynamic linking has no downsides and has the upside of reducing disk usage and runtime memory usage (sharing pages).

    Your problem with "libFlac.8.so is missing" happens when using software that isn't from the distro repos. Feel free to statically link it, or run it via AppImage or Flatpak or Podman or whatever you want that provides the environment it *was* compiled for. Whether the rest of the distro is dynamically linked or not makes no difference to your ability to run this software, so there's no reason to make the rest of the distro worse.

    I personally do care about disk usage and memory usage. I also care about using software from distro repos vs Flatpak etc. wherever possible, because software in the distro repos is maintained by someone whose values align with mine rather than with the upstream software author's. E.g. the firefox package from distro repos enables me to load my own extensions without Mozilla's gatekeeping, the Audacity package from distro repos did not carry the telemetry that the Audacity devs added to their own builds, etc.

    • cbmuser 2 hours ago

      The main argument for using shared libraries isn’t memory or disk usage, but simply security.

      If you have a thousand packages linking statically against zlib, you will have to update a thousand packages in case of a vulnerability.

      With a shared zlib, you will have to update only one package.

      • pizlonator 7 minutes ago

        That's also a good argument for shared libraries, but the memory usage argument is a big one. I have 583 processes running on my Linux box right now (I'm posting from FF running on GNOME on Pop!_OS), and it's nice that when they run the same code (like zlib but also lots of things that are bigger than zlib), they load the same shared library, which means they are using the same mapped file and so the physical memory is shared. It means that memory usage due to code scales with the amount of code (linear), not with the number of processes times the amount of code (quadratic).

        I think that having a sophisticated desktop OS where every process had distinct physical memory for duplicated uses of the same code would be problematic at scale, especially on systems with less RAM. At least that physical memory would still be disk-backed, but that only goes so far.

        That said, the security argument is also a good argument!
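
        A quick way to see that sharing on Linux, as a minimal C sketch (not specific to any distro): every process's /proc/<pid>/maps lists the same file-backed, read-execute segments for a shared library, and the kernel backs those segments with the same page-cache pages.

            #include <stdio.h>
            #include <string.h>

            /* Print this process's executable, file-backed ".so" mappings.
               Run it from a few different programs and the same libraries
               show up, mapped from the same files, so their code pages are
               shared physical memory. */
            int main(void) {
                FILE *f = fopen("/proc/self/maps", "r");
                if (!f) { perror("fopen"); return 1; }
                char line[512];
                while (fgets(line, sizeof line, f)) {
                    if (strstr(line, "r-xp") && strstr(line, ".so"))
                        fputs(line, stdout);
                }
                fclose(f);
                return 0;
            }

        (pmap <pid> reports much the same thing without writing any code.)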

        • Arnavion 2 hours ago

          That's not a difference of security, only of download size (*). A statically-linking distro would queue up those thousand packages to be rebuilt and you would receive them via OS update, just as you would with a dynamically-linking distro. The difference is just in whether you have to download 1000 updated packages or 1.

          And on a distribution like OpenSUSE TW, the automation is set up such that those thousand packages do get rebuilt anyway, even though they're dynamically linked.

          (*): ... and build time on the distro package builders, of course.

          • singron 15 minutes ago

            The build time is pretty significant actually. E.g. NixOS takes this to the extreme and always rebuilds packages if their dependencies change. It can take days to rebuild affected packages, and you typically can't easily update in the meantime. This is worse if the rebuild breaks something since someone will have to fix or revert that before it can succeed.

            For non-security updates, they can do the rebuild in a branch (staging) so it's non-blocking, but you will feel the pain on a security update.

            In practice, users will apply a config mitigation if available or "graft" executables against an updated lib using system.replaceDependencies, which basically search-and-replaces paths in the built artifacts so they point at a different file.

            • Arnavion 10 minutes ago

              True. I'm involved in Alpine, postmarketOS and OpenSUSE packaging and I've seen builders of the first two become noticeably slow on mass rebuilds. OpenSUSE tends to be fine, but it has a lot of builders so that's probably why.

          • imoverclocked 2 hours ago

            > The main argument for using shared libraries isn’t memory or disk usage, but simply security.

            “The” main argument? In a world filled with diverse concerns, there isn’t just one argument that makes a decision. Additionally, security is one of those things where practically everything is a trade off. Eg: by having lots of things link against a single shared library, that library becomes a juicy target.

            > With a shared zlib, you will have to update only one package.

            We are back to efficiency :)

            • sunshowers 2 hours ago

              The solution here is to build tooling to track dependencies in statically linked binaries. There is no inherent reason that has to be tightly coupled to the dynamic dispatch model of shared objects. (In other words, the current situation is not an inherent fact about packaging. Rather, it is path-dependent.)

              For instance, many modern languages use techniques that are simply incompatible with dynamic dispatch. Some languages like Swift have focused on dynamic dispatch, but mostly because it was a fundamental requirement placed on their development teams by executives.

              While there is a place for dynamic dispatch in software, there is also no inherent justification for dynamic dispatch boundaries to be exactly at organizational ones. (For example, there is no inherent justification for the dynamic dispatch boundary to be exactly at the places a binary calls into zlib.)

              edit: I guess loading up a .so is more commonly called "dynamic binding". But it is fundamentally dynamic dispatch, ie figuring out what version of a function to call at runtime.
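
                As a concrete illustration, here's a minimal C sketch (zlib chosen only because it's the running example in this thread) of why loading a .so really is a form of dynamic dispatch: which library, and which function body, gets used is decided at run time by the loader.

                    /* build: cc dlz.c -ldl  (the -ldl is unnecessary on modern glibc) */
                    #include <dlfcn.h>
                    #include <stdio.h>

                    int main(void) {
                        /* Which libz? Decided now, at run time, by the dynamic loader. */
                        void *h = dlopen("libz.so.1", RTLD_NOW);
                        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

                        /* Which function? Looked up by name and called through a
                           pointer -- a dispatch decision made at run time. */
                        const char *(*version)(void) =
                            (const char *(*)(void))dlsym(h, "zlibVersion");
                        if (!version) { fprintf(stderr, "%s\n", dlerror()); dlclose(h); return 1; }

                        printf("zlib %s\n", version());
                        dlclose(h);
                        return 0;
                    }

                The ordinary case (linking with -lz and calling zlibVersion() directly) just hides the same lookup in the loader's PLT/GOT machinery, resolved at load time or on first call; it is still a run-time decision.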

              • nmz an hour ago

                Does static compilation use the entire library instead of just the parts that are used? If I'm just using a single function from this library, why include everything?

                • t-3 2 hours ago

                  If a vulnerability in a single library can cause security issues in more than one package, there are much more serious issues to consider with regards to that library than the need to recompile 1000 dependents. The monetary/energy/time savings of being able to update libraries without having to rebuild dependents are of far greater significance than the theoretical improvement in security.

                • skissane an hour ago

                  > I also care about using software from distro repos vs Flatpak etc wherever possible, because software in the distro repos is maintained by someone whose values align with me and not the upstream software author.

                    The problem one usually finds with distro repo packages is that they are usually out of date compared to the upstream – especially if you are running a stable distro release as opposed to the latest bleeding edge. You can get in a situation where you are forced to upgrade your whole distro to some unstable version, which may introduce lots of other issues, just because you need a newer version of some specific package. Upstream binary distributions, Flatpak/etc, generally don't have that issue.

                  > the firefox package from distro repos enables me to load my own extensions without Mozilla's gatekeeping, the Audacity package from distro repos did not have telemetry enabled that Audacity devs added to their own builds, etc

                    This is mainly a problem with "commercial open source", where an open source package is simultaneously a commercial product. "Community open source" – where the package is developed in people's spare time as a hobby, or even by commercial developers for whom the package is just some piece of platform infrastructure rather than a product in itself – is much less likely to have this kind of problem.

                  • anotherhue 3 hours ago

                    If you're in an adversarial relationship with the OEM software developer there's not a whole lot the distro maintainers can do, probably time to find a fork/alternative. (Forks exist for both your examples).

                      I say this as a casual maintainer of several apps, and I'm loath to patch things manually rather than upstream a fix.

                    • Arnavion 2 hours ago

                      I'm not going to switch to a firefox fork over one line in the configure script invocation. Forks have their own problems with maintenance and security. It's not useful to boil it down to one "adversarial relationship" boolean.

                    • jauntywundrkind an hour ago

                        With Debian, one can also apt-pin packages from different releases. So you can run testing, for example, but have oldstable, stable, unstable and experimental all pinned and available.

                      That maximizes your chance of being able to satisfy a particular dependency like libflac.8.so. Sometimes that might not actually be practical to pull in or might involve massively changing a lot of your installed software to satisfy the dependencies, but often it can be a quick easy way to drop in more libraries.

                        Sometimes libraries don't have a version number on them, so the package keeps being libflac even across major versions. That's a problem because ideally you want to be able to install old version 8 alongside newer version 12. But generally Debian is pretty good about allowing multiple major versions of packages. Here for example is libflac12, on stable and unstable both. https://packages.debian.org/search?keywords=libflac12

                      • red016 2 hours ago

                        Install Gentoo.

                      • odo1242 3 hours ago

                          Something worth noting with shared dependencies is that yes, they save on disk space, but they also save on memory. A 200MB non-shared dependency will take up 600MB across three apps, but a 200MB shared dependency can be loaded on its own and save 400 megabytes. (Most operating systems manage this at the physical page level, by mapping multiple instances of a shared library to the same physical memory pages.)

                          400 megabytes of memory usage is probably worth more than 400 megabytes of storage. It may not be a make-or-break thing on its own, but it's one of the reasons Linux can run on lower-end devices.

                        • packetlost 3 hours ago

                            When you statically compile an application, you only store the text (code) of functions you actually use, generally, so unless you're using all 400MB of code you're not going to have a 400MB+ binary. I don't think I've ever seen a dependency that was 400MB of compiled code if you stripped debug information and weren't embedding graphical assets, so I'm not sure how relevant this is in the first place. 400MB of opcodes is... a lot.
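
                            A rough way to check that for yourself (GNU toolchain assumed; the file name and exact flags are illustrative): link a trivial program against a static libz.a and compare the binary's size to the archive. The linker only pulls in the archive members needed to resolve the symbols you actually reference, and -ffunction-sections/-Wl,--gc-sections can trim further.

                                /* tiny_crc.c -- references exactly one zlib entry point.
                                   Something along the lines of:
                                     cc -Os -ffunction-sections -Wl,--gc-sections tiny_crc.c -l:libz.a
                                   yields a binary far smaller than libz.a itself, because only
                                   the objects that satisfy crc32() get linked in. */
                                #include <stdio.h>
                                #include <zlib.h>

                                int main(void) {
                                    const unsigned char msg[] = "hello";
                                    unsigned long c = crc32(0L, msg, sizeof msg - 1);
                                    printf("crc32(\"hello\") = %lx\n", c);
                                    return 0;
                                }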

                          • AshamedCaptain 3 hours ago

                            I have seen a binary that is approximately 1.1GB of text. That is without debug symbols. With debug symbols it would hit GDB address space overflow bugs all the time. You have not seen what 30-year-old engineering software houses can produce. And this is not the largest such software by any means.

                            Also, sibling comments argue that kernel samepage merging can help avoid the bloat of static linking. But here what you argue will make every copy of the shared libraries oh-so-slightly-different and therefore prevent KSM from working at all. Really, no one is thinking this through very well. Even distributions that do static linking in all but name (such as NixOS) do still technically use dynamic linking for the disk space and memory savings.

                            • sunshowers an hour ago

                              Some programs are definitely that complicated, and it's a horrible idea for them to use dynamic binding! The test matrix is absurdly large. (You are testing the software you ship as you ship it, aren't you?)

                              • AshamedCaptain an hour ago

                                By the same logic, you also would have to test the software with every display resolution / terminal width in existence. You are testing the software you ship as you ship it, aren't you?

                                Abstractions are a thing in computer science. Abstracting at the shared library layer makes as much sense as abstracting at the RPC layer (which your software is most likely going to be obligated to do) or abstracting at the ISA level (which your software IS obligated to do). Your software has as many chances to break from a library change as it does from a display driver change or from a screen resolution change or from a processor upgrade. Why the first would bloat the "testing matrix" but not the latter is beyond me, and already shows a bias against dynamic linking: you assume library developers are incapable of keeping an ABI but that the CPU designers are. (Anecdotally, as a CPU designer, I would rather trust the library developers.)

                                • sunshowers an hour ago

                                  In practice, I've seen breakage from shared library updates be much more common than breakage from display resolutions.

                                  Many modern software development paradigms are simply not compatible with ABIs or dynamic binding. Dynamic binding also likely means you're leaving a bunch of performance on the table, since inlining across libraries isn't an option.

                                  • AshamedCaptain an hour ago

                                    > In practice, I've seen breakage from shared library updates be much more common than breakage from display resolutions.

                                    You'd be surprised, especially when I'm thinking of 30-year-old software. Again, usually I can patch around it thanks to dynamic linking...

                                    > Many modern software development paradigms are simply not compatible with ABIs or dynamic binding

                                    This is nonsense.

                                    > Dynamic binding also likely means you're leaving a bunch of performance on the table, since inlining across libraries isn't an option.

                                    Again, why set the goalpost here and not at, say, the ISA level or any other abstraction layer? I could literally make the same argument at any of these levels (e.g. "you are leaving a bunch of performance on the table" by not specializing your ISA to your software). How much are you really leaving? And how much would you pay if you removed the abstraction? What are the actual pros/cons?

                                    • sunshowers an hour ago

                                      True! That is a completely valid argument. Why not ship your own kernel? Your own processors with your own ISA?

                                      The answer to each of these questions is specific to the circumstances. You have to decide based on general principles (what do you value?), the specific facts, and ultimately judgment.

                                      I think in some cases (e.g kernel or libc) using dynamic binding generally makes sense, but I happen to think forcing shared library use has many more costs than benefits.

                                      You're absolutely right that everyone should ask these questions, though. I work at Oxide where we did ask these questions, and decided that to provide a high-quality cloud-like experience we need much tighter coupling between our components than is generally available to the public. So, for example, we don't use a BIOS or UEFI; we have our own firmware that is geared towards loading exactly the OS we ship.

                                      > This is nonsense.

                                      Monomorphization like in C++ or Rust doesn't work with dynamic binding. C macros and header-only libraries don't work with dynamic binding either.
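
                                      A tiny C illustration of the header-only/macro point (hypothetical names): a static inline helper is compiled into every caller, so there is no single symbol left behind for a dynamic linker to swap out later.

                                          /* clamp.h -- hypothetical header-only "library".  Every
                                             translation unit that includes it gets its own inlined
                                             copy of clamp_int(); no shared object can replace that
                                             code after the fact. */
                                          #ifndef CLAMP_H
                                          #define CLAMP_H

                                          static inline int clamp_int(int v, int lo, int hi) {
                                              return v < lo ? lo : (v > hi ? hi : v);
                                          }

                                          #endif

                                          /* user.c */
                                          #include <stdio.h>
                                          #include "clamp.h"

                                          int main(void) {
                                              printf("%d\n", clamp_int(42, 0, 10)); /* logic baked in here */
                                              return 0;
                                          }

                                      Monomorphized generics in C++ and Rust behave the same way: the instantiated code ends up inside the caller's binary, with nothing for a run-time linker to rebind.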

                              • packetlost 3 hours ago

                                Your poor I$.

                                The fact of the matter is that, outside of GPU drivers and poorly designed GUI applications, that's an extreme statistical outlier.

                                • AshamedCaptain an hour ago

                                  Sum the GPU driver and the mandatory LLVM requirement that comes with it, and you already have almost half a gigabyte of text per exec. No wonder that your logic forces you to literally dismiss all GUI applications as "extreme statistical outliers" out of hand.

                                  For the record, the 1.1GB executable I'm thinking about is a popular _terminal-only_ simulation package. No GUI.

                                  • nightowl_games 19 minutes ago

                                    Can't you just tell us what it is?

                                  • lostmsu 3 hours ago

                                    You mean GPU libraries, right?

                                    Just watch how these become used in every other app over the next few years.

                                    And yes, the fact that you won't use those apps does not make it easier for the rest of us.

                                • vvanders 3 hours ago

                                    Yes, this is lost in most discussions when it comes to DSOs. Not only do you have the complexity of versioning and vending, but you also can't optimize with LTO and other techniques (which can make a significant difference in final binary size).

                                    If you've got 10-15+ consumers of a shared library, or want to do plugins/hot-code reloading and have a solid versioning story, by all means vend a DSO. If you don't, however, I would strongly recommend trying to keep all dependencies static and letting LTO/LTCG do its thing.

                                • tdtd an hour ago

                                    This is certainly true on Windows, where loaded DLLs share the base addresses across processes even with ASLR enabled, but is it the case on Linux, where ASLR forces randomization of .so base addresses per process, so relocations will make the data in their pages distinct? Or is it the case that on modern architectures with IP-relative addressing (like x64) relocations are so uncommon that most library pages contain none?

                                  • saagarjha 18 minutes ago

                                    Code pages are intentionally designed to stay clean; relocations are applied elsewhere in pages that are different for each process.

                                  • Lerc 3 hours ago

                                    In practice I don't think this results in memory savings. By having a shared library and shared memory use, you also have distributed the blame for the size of the application.

                                      It would be true that this saves memory if applications did not increase their memory requirements over time, but the fact is that they do, and the rate at which they increase their memory use seems to be dictated not by how much memory they intrinsically need but by how much is available to them.

                                    There are notable exceptions. AI models, Image manipulation programs etc. do actually require enough memory to store the relevant data.

                                    On the other hand I have used a machine where the volume control sitting in the system tray used almost 2% of the system RAM.

                                      Static linking enables the cause of memory use to be more clearly identified. That enables people to see who is wasting resources. When people can see who is wasting resources, there is a higher incentive not to waste them.

                                    • pessimizer 3 hours ago

                                      This is a law of averages argument. There is no rational argument for bloat in order to protect software from bloat. This is like saying that it doesn't matter that we waste money, because we're going to spend the entire budget anyway.

                                      • Lerc 3 hours ago

                                        I think it's more like,

                                        If we pay money to someone to audit our books, we are more likely to achieve more within our budget.

                                    • pradn 3 hours ago

                                      It's possible for the OS to recognize that several pages have the same content (ie: doing a hash) and then de-duplicate them. This can happen across multiple applications. It's easiest for read-only pages, but you can swing it for mutable pages as well. You just have to copy the page on the first write (ie: copy-on-write).

                                      I don't know which OSs do this, but I know hypervisors certainly do this across multiple VMs.

                                      • slabity 3 hours ago

                                        Even if the OS could perfectly deduplicate pages based on their contents, static linking doesn't guarantee identical pages across applications. Programs may include different subsets of library functions and the linker can throw out unused ones. Library code isn't necessarily aligned consistently across programs or the pages. And if you're doing any sort of LTO then that can change function behavior, inlining, and code layout.

                                        It's unlikely for the OS to effectively deduplicate memory pages from statically linked libraries across different applications.

                                        • ChocolateGod 3 hours ago

                                            Correct me if I'm wrong, but Linux only supports KSM (memory deduping) between processes when doing it between VMs, as QEMU provides information to the kernel to perform it.

                                          • yjftsjthsd-h an hour ago

                                            https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.ht...

                                            > KSM was originally developed for use with KVM (where it was known as Kernel Shared Memory), to fit more virtual machines into physical memory, by sharing the data common between them. But it can be useful to any application which generates many instances of the same data

                                            Although...

                                            > KSM only operates on those areas of address space which an application has advised to be likely candidates for merging, by using the madvise(2) system call
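
                                              For anyone curious what that advice looks like, a minimal C sketch (assumes a kernel built with CONFIG_KSM and ksm switched on via /sys/kernel/mm/ksm/run):

                                                  #define _GNU_SOURCE
                                                  #include <stdio.h>
                                                  #include <string.h>
                                                  #include <sys/mman.h>

                                                  int main(void) {
                                                      size_t len = 64UL * 1024 * 1024;
                                                      /* Anonymous memory filled with identical data... */
                                                      char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                                                      if (buf == MAP_FAILED) { perror("mmap"); return 1; }
                                                      memset(buf, 0x42, len);

                                                      /* ...only becomes a merge candidate once we opt in: */
                                                      if (madvise(buf, len, MADV_MERGEABLE) != 0)
                                                          perror("madvise(MADV_MERGEABLE)");

                                                      /* keep it alive; watch /sys/kernel/mm/ksm/pages_sharing */
                                                      getchar();
                                                      return 0;
                                                  }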

                                        • indigodaddy 3 hours ago

                                          I remember many years ago VMWare developed a technology to take advantage of these shared library savings across VMs as well.

                                          https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsp...

                                          • abhinavk 3 hours ago

                                            Linux’s KVM has it too. It’s called KSM.

                                            • indigodaddy 3 hours ago

                                              ah right, forgot about that as well

                                          • bbatha 3 hours ago

                                            Only if you don’t have LTO on. If you have LTO on you’re likely to use a fraction of the shared dependency size even across multiple apps.

                                            • odo1242 3 hours ago

                                              That is a good point.

                                            • whatshisface 3 hours ago

                                              Isn't 200MB a little large for a dependency? The Linux kernel is ~30MB.

                                              • odo1242 3 hours ago

                                                It's around the size of OpenCV, to be specific. I do see your argument though.

                                              • anotherhue 3 hours ago

                                                Assuming it is worth it with modern memory sizes (I think not), could this present a negative on NUMA systems? Forcing contention when a simple copy would have been sufficient?

                                                I assume there's something optimising that away but I'm not well versed.

                                                • 839302 18 minutes ago

                                                  Hola. No

                                                • tannhaeuser 2 hours ago

                                                  > Sure, we now have PipeWire and Wayland. We enjoy many modern advances and yet, the practical use for me is worse than it was 10 years ago.

                                                    That's what has made me leave Linux for Mac OS, and I'm talking about things like the touchpad (libinput) causing physical pain and crashing every couple of minutes, while the "desktop" wants to appeal to a hypothetical casual tablet user even Microsoft left long behind in the Windows 8.1 era (!). Mind, Mac OS is far from perfect and also regressing (hello window focus management, SIP, refactored-into-uselessness expose, etc, etc), but Mac OS at least has a wealth of new desktop apps created in this millennium to make up for it, unlike the Linux desktop struggling to keep the same old apps running while it's on a refactoring spree to fix self-inflicted problems (like glibc, ld.so) and then still not attracting new developers. I wish I could say, like the author, that containers are the solution, but the canonical example of browser updates is also a case of unwarranted and rampant complexity piling up without the slightest actual benefit for the user as the web is dying.

                                                  • imiric an hour ago

                                                    I agree to an extent, but Linux is actually very usable if you stick to common quality hardware, and a collection of carefully curated robust and simple software. I realize this is impractical for most people, but having a limited ecosystem is what essentially allows Apple to deliver the exceptional user experience they're known for.

                                                    The alternative of supporting an insane amount of hardware, and making it work with every combination of software, both legacy and cutting-edge, is much, much harder to achieve. It's a miracle of engineering that Linux works as well as it does, while also being developed in a globally distributed way with no central company driving the effort. Microsoft can also pull this off arguably better, but with a completely different development and business model, and all the resources in the world, so it's hardly comparable.

                                                      The sad part is that there is really no alternative to Linux if you want full control over your devices while also having a decent user experience. Windows and macOS are walled gardens, and at the opposite end of the spectrum BSDs and other niche OSs are nowhere near as usable. So the best we can do is pick a good Linux distro, something we're thankfully spoiled for choice on, and customize it to our needs, which unfortunately does take a lot of time and effort. I still prefer this over the alternatives, though.

                                                    • desumeku 2 hours ago

                                                      > while the "desktop" wants to appeal to a hypothetical casual tablet user even Microsoft left long behind in the Windows 8.1 era This sounds like a GNOME issue, not a Linux one.

                                                      • nosioptar 39 minutes ago

                                                        I hate libinput and its lack of configuration so much I've contemplated going back to windows. I probably would have if there was a legit way to get win ltsc as an individual.

                                                      • BeetleB 3 hours ago

                                                        Been running Gentoo for over 20 years. It's as (un)stable now as it was then. It definitely has not regressed in the last N years.

                                                        I don't see this as a GNU/Linux problem, but a distro problem.

                                                        Regarding shared libraries: What do you do when a commonly used library has a security vulnerability? Force every single package that depends on it to be recompiled? Who's going to do that work? If I maintain a package and one of its dependencies is updated, I only need to check that the update is backward compatible and move on. You've now put a huge amount of work on my plate with static compilation.

                                                        Finally: What does shared libraries have to do with GNU/Linux? I don't think it's a fundamental part of either. If I make a distro tomorrow that is all statically compiled, no one will come after me and tell me not to refer to it as GNU/Linux. This is an orthogonal concern.

                                                        • NotPractical 26 minutes ago

                                                          > Some interesting talks and videos [...] GNU is Bloated! by Luke Smith

                                                            Strange reference because that video isn't relevant to the topic at all. It's about the difference between GNU coreutils command line options and other standards such as BSD and POSIX (the title is mostly a joke). In fact Luke Smith is vehemently against the idea of AppImages/Flatpaks/Snaps and believes they go against the spirit of GNU/Linux [1].

                                                          [1] https://www.youtube.com/watch?v=JPXLpLwEQ_E

                                                          • ChocolateGod 3 hours ago

                                                              > I am all in for AppImages or something like that. I don't care if these images are 10x bigger. Disk space now is plenty, and they solve the issue with "libFlac.8.so is missing"

                                                              They don't solve the dependency problem, they solve a distribution problem (in a way that's bad for security). What the AppImage provides is up to the author, and once you go outside the "Debian/Ubuntu" sphere, you run into problems with distributions such as Arch and Fedora, which provide newer packages or do things slightly differently. You can have them fail to run if you're missing Qt, or your Qt version does not match the version it was compiled against; same with GTK, Mesa, Curl, etc.

                                                            The moment there's an ABI incompatibility with the host system (not that uncommon), it breaks down. Meanwhile, a Flatpak produced today should run in 20 years time as long as the kernel doesn't break user-space.

                                                            They don't run on my current distribution choice of NixOS. Meanwhile Flatpaks do.

                                                            • AshamedCaptain 3 hours ago

                                                                I really doubt an X11 Flatpak from today will run on the Fedora of 10 years from now, much less 20 years. They will break XWayland (in the name of "security") well before that. They will break D-Bus well before that.

                                                                In addition, the kernel breaks ABI all the time; sometimes this can be partially worked around thanks to dynamic linking (e.g. OSS and solutions like aoss). Other times not so much.

                                                                I feel that every time someone introduces a solution that is supposed to be "future proof" for 20 years, they should make the effort to run 20-year-old binaries on their Linux system of today and extrapolate from that.

                                                            • hi-v-rocknroll 21 minutes ago

                                                              This is a "Unabomber"-style prescription to the problem. The problem isn't shared libraries and the solution isn't static linking everything because that's wasteful for repetitive binary code duplicated N times consuming disk and RAM pointlessly. The problem is solved by management and cooperative sharing of shared libraries that don't rely on a fixed, singleton location or only allowing a single configuration of a shared library, but allow side-by-side installations of multiple version series and multiple configurations. Nix mostly solves these problems, but still has teething problems of getting things to work right especially for non-code shared file locations and dlopen program plugins. I think Nix is over-engineered and learning-curve user-hostile but is generally a more correct approach. There is/was a similar project from what was Opscode called Habitat that did something similar. Before that, we used Stow and symlink forests.

                                                              • anon291 3 hours ago

                                                                The main issue is mutability, not shared objects. Shared objects are a great memory optimization for little cost. The dependency graph is trivially tracked by sophisticated build systems and execution environments like NixOS. We live in 2024. Computers should be able to track executable dependencies and keep around common shared object libraries. This is a solved technical problem.

                                                                • tremon 36 minutes ago

                                                                    Immutability actually destroys the security benefits that shared objects bring, because with every patch the location of the library changes. So you're back to the exact same situation as without dynamic linking: every dependent package will need to be recompiled anyway against the new library location. And that means that even though you may have a shared object that's already patched, every other package on your system that's not yet been recompiled is still vulnerable.

                                                                • mfuzzey 3 hours ago

                                                                    I haven't seen such problems on Debian or Ubuntu; I guess it's par for the course with a bleeding edge distro.

                                                                    The author seems to be focusing on the disk space advantage and claiming it's not enough to justify the downsides today. I can understand that, but I don't think disk space savings are the main advantage of shared dependencies; rather, it's centralized security updates. If every package bundles libfoo, what happens when there's a security vulnerability in libfoo?

                                                                  • cbmuser 2 hours ago

                                                                    > If every package bundles libfoo what happens there's a security vulnerability in libfoo?

                                                                    That’s actually the key point that many people in this discussion seem to miss.

                                                                    • pmontra an hour ago

                                                                      What happens is that libfoo gets fixed, possibly by the maintainers of the distro, and all the apps using it are good to go again.

                                                                      With multiple versions bundled to multiple apps, a good number of those apps will never be updated, at least not in a timely manner, and the computer will be left vulnerable.

                                                                      • sunshowers an hour ago

                                                                        Then you get an alert that your libfoo has a vulnerability (GitHub does a pretty good job here!) and you roll out a new version with a patched libfoo.

                                                                        • kelnos an hour ago

                                                                          As a user, I don't want to assume that every single maintainer of every single app that uses (a statically linked) libfoo is keeping up to date with security issues in their dependencies and has the time and ability to promptly update their software.

                                                                          But I feel pretty safe believing that the debian libfoo package maintainer is on top of things and will quickly release an update to libfoo.so that all apps running on my system will be able to take advantage of.

                                                                          • sunshowers 40 minutes ago

                                                                            That's fair, but the Debian maintainer could just as well update libfoo.a and kick off builds of all the reverse transitive dependencies of libfoo.a.

                                                                            • tremon 33 minutes ago

                                                                              Specifically in the case of Debian, who is going to pay for all the additional infrastructure (build servers) that switching to dependency vendoring/static linking would require?

                                                                              • sunshowers 30 minutes ago

                                                                                Good question. I think someone would have to run the numbers here!

                                                                    • anotherhue 3 hours ago

                                                                        Since I switched to NixOS, all these articles read like people fiddling with struct packing and optimising their application memory layout. The compiler does it well enough now that we don't have to; so it is with Nix and your application filesystem.

                                                                      • __MatrixMan__ 3 hours ago

                                                                        I had a similar feeling during the recent crowdstrike incident. Hearing about how people couldn't operate their lathe or whatever because of an update, my initial reaction was:

                                                                        > Just boot yesterday's config and get on with your life

                                                                        But then, that's one of those NixOS things that we take for granted.

                                                                        • sshine 32 minutes ago

                                                                          Not just NixOS. Ubuntu with ZFS creates a snapshot of the system on every `apt install` command.

                                                                          But yeah, it’s pretty great to know that if your system fails, just `git restore --staged` and redeploy.

                                                                        • thot_experiment 3 hours ago

                                                                          Using shared libraries is optimizing for a very different set of constraints than nixos, which iirc keeps like 90 versions of the same thing around just so everyone can have the one they want. There are still people who are space constrained. (I haven't touched nix in years so maybe i'm off base on this)

                                                                          > The compiler does it well enough now that we don't have to

                                                                              You know, I see people say this, and then I see some code with nested loops running 2x as fast as code written with list comprehensions, and I remember that it's actually:

                                                                          "The compiler does it well enough now that we don't have to as long as you understand the way the compiler works at a low enough level that you don't use patterns that will trip it up and even then you should still be benchmarking your perf because black magic doesn't always work the way you think it works"

                                                                          Struct packing too can still lead to speedups/space gains if you were previously badly aligned, which is absolutely something that can happen if you leave everything on auto.

                                                                          • matrss 2 hours ago

                                                                            > Using shared libraries is optimizing for a very different set of constraints than nixos, which iirc keeps like 90 versions of the same thing around just so everyone can have the one they want.

                                                                            This isn't really true. One version of nixpkgs (i.e. a specific commit of https://github.com/NixOS/nixpkgs) generally has one version of every package and other packages from the same nixpkgs version depending on it will use the same one as a dependency. Sometimes there are multiple versions (different major versions, different compile time options, etc.) but that is the same with other distros as well.

                                                                            In that sense, NixOS is very similar to a more traditional distribution, just that NixOS' functional package management better encapsulates the process of making changes to its package repository compared to the ad-hoc nature of a mutable set of binary packages like traditional distros and makes it possible to see and rebuild the dependency graph at every point in time while a more traditional distro doesn't give you e.g. the option to pretend that it's 10 days or months ago.

                                                                            You only really get multiple versions of the same packages if you start mixing different nixpkgs revisions, which is really only a good idea in edge cases. Old ones are also kept around for rollbacks, but those can be garbage collected.

                                                                            • cbmuser 2 hours ago

                                                                                    Multiple versions of a shared library are a pure nightmare if you actually care about security.

                                                                            • anotherhue 3 hours ago

                                                                                  No argument, if you're perf sensitive and aren't benchmarking every change then it's a roll of the dice as to whether llvm will bless your build.

                                                                                  The usual claim stands though: on a LoC basis a vanishingly small amount of code is perf sensitive (embedded likely more, TBF).

                                                                              • thomastjeffery 3 hours ago

                                                                                The problem with Nix is that it's a single monolithic package archive. Every conceivable package must go somewhere in the nixpkgs tree, and is expected to be as vanilla as possible.

                                                                                On top of that, there is the all-packages.nix global namespace, which implicitly urges everyone to use the same dependency versions; but in practice just results in a mess of redundant names like package_version.1.12_x-feature-enabled...

                                                                                    The move toward flakes only replaces this problem with intentional fragmentation. Even so, flakes will probably end up being the best option if they ever get coherent documentation.

                                                                              • jeltz 3 hours ago

                                                                                    The C compiler does not optimize struct packing at all. Some languages like Rust allow optimizing struct layouts, but even in Rust struct layout can matter if you care about cache locality and vector operations.
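
                                                                                    A quick C illustration (sizes are ABI-dependent, but typical for 64-bit Linux): since the compiler may not reorder members, declaration order alone decides the padding.

                                                                                        #include <stdio.h>

                                                                                        /* Same three fields, two declaration orders.  C keeps them
                                                                                           as written, so the first layout pads after each char. */
                                                                                        struct padded    { char a; long b; char c; }; /* typically 24 bytes */
                                                                                        struct reordered { long b; char a; char c; }; /* typically 16 bytes */

                                                                                        int main(void) {
                                                                                            printf("padded:    %zu\n", sizeof(struct padded));
                                                                                            printf("reordered: %zu\n", sizeof(struct reordered));
                                                                                            return 0;
                                                                                        }

                                                                                    Rust's default repr, by contrast, leaves the compiler free to reorder fields, which is the distinction being drawn here.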

                                                                                • cbmuser 2 hours ago

                                                                                  The compiler takes care of vulnerabilities?

                                                                                  There was a recent talk that discussed the security nightmare with dozens of different versions of shared libraries in NixOS and how difficult it is for the distribution maintainers to track and update them.

                                                                                • kelnos an hour ago

                                                                                  I feel like these sorts of articles come up every so often, but still don't buy it.

                                                                                      While I agree that disk usage is no longer a driver for shared libraries, memory usage still is, to some extent. If I have 50 processes using the same library (and I do), that shared library's read-only data sections get loaded into RAM exactly once. That's a good thing.

                                                                                      But even if that problem wasn't an issue, security is still a big one for me. When my distro releases a new version of a library package to fix a security issue, every single package that uses it gets the security fix, without each maintainer having to rebuild their package against the fixed version. (Sure, some distros manage this more centrally and won't have to wait for individual maintainers, but not all are like that.)

                                                                                  I don't have to wonder what app has been fixed and what hasn't been. I don't have to make sure every single AppImage/Flatpak/Snap on my system that depends on that library (which I may not even know) gets updated, and possibly disable or uninstall those that haven't been, until they have.

                                                                                  I like shared libraries, even with the problems they sometimes (rarely!) cause.

                                                                                  • neilv an hour ago

                                                                                    > Just a normal update that I do many times per week. [...] At this point, using GNU/Linux is more like a second job, and I was so stoked when this was not a case anymore in the past. This is why I feel like the last 10 years were a regression disguised as progress.

                                                                                    But the author already knows the solution...

                                                                                    > My best memories were always with Debian. Just pure Debian always proved to be the most stable system. I never had issue or system breaking after an update. I can't say the same for Fedora.

                                                                                    You've always had the power... tap your heels together three times, and just install Debian Stable.

                                                                                    • kelnos an hour ago

                                                                                      Hell, these days I switch from Debian stable to testing around 6 months after each stable release, and I still have fewer issues than I've had with most other distros in the past.

                                                                                    • kelnos 30 minutes ago

                                                                                      > How come GNU/Linux is worse than it was 10 years ago?

                                                                                      This hasn't been my experience at all, as someone who's been using it for more than 20 years now. Today, for the most part I don't have to tinker with anything, or think about the inner workings of my distro. Even just a decade ago that wasn't true.

                                                                                      • AshamedCaptain 3 hours ago

                                                                                        If anything, I'd argue that the "abysmal state of GNU/Linux" is because programs now tend to bundle their own dependencies, and not the opposite.

                                                                                        • sebastos 2 hours ago

                                                                                          Grrr - I strongly, viscerally disagree!

                                                                                          All of these new dependency bundling technologies were explicitly created to get out from under the abysmal state of packaging - from Docker (in some ways) and on to snap, flat pack, appimage, etc. This state of affairs was explained in no uncertain terms and widely repeated in the various manifestos associated with those projects. The same verbiage is probably still there if you go look. It seems crazy to act as if this recent memory is obscured by the mists of time, leaving us free to speculate on the direction of causality. We all lived this, and that’s not how it happened! Besides, in your telling, thousands of people and multiple separate organizations poured blood sweat and tears into these various bundling technologies for no good reason. Why’d they do that? I can’t help but suspect your answer is something like “it all worked fine, people just got lazy. Just simply work with the distro maintainer to get your package accepted and then … etc etc etc”. What do people have to do to communicate with the Linux people that this method of distribution is sucky and slow and excruciating? They’ve built gigantic standalone ecosystems to avoid doing it this way, yet the Linux people are still smugly telling themselves that people are just too stupid and lazy to do things The Right Way.

                                                                                          • zrm 2 hours ago

                                                                                            > thousands of people and multiple separate organizations poured blood sweat and tears into these various bundling technologies for no good reason. Why’d they do that?

                                                                                            Because stability and rapid change are incompatible, and they wanted rapid change.

                                                                                            Which turns into a maintenance nightmare because now every app is using a different, incompatible version of the same library and somebody has to backport bug fixes and security updates to each individual version used by each individual package. And since that's a ton of work nobody wants to do, it usually doesn't get done and things packaged that way end up full of old bugs and security vulnerabilities.

                                                                                            • sunshowers an hour ago

                                                                                              I'm a big believer in not getting in the way when people want to build stuff. I get GitHub alerts for vulnerabilities in the dependencies of my Rust programs, and I release new versions whenever there's a relevant vulnerability.

                                                                                              My programs work, pass all tests on supported platforms, and don't have any active vulns. Forcing dynamic binding on me is probably not a good idea, and certainly not work I want to do.

                                                                                              • kelnos 38 minutes ago

                                                                                                > I get GitHub alerts for vulnerabilities in the dependencies of my Rust programs, and I release new versions whenever there's a relevant vulnerability.

                                                                                                That's great, but my confidence is very low that most maintainers are like you.

                                                                                                • sunshowers 19 minutes ago

                                                                                                  Sure! I'm an optimist at heart and think a lot of people can learn though :)

                                                                                                  And note that static linking doesn't prevent third-party distributors like Linux maintainers from patching software. It's just that a lot of the current tooling can't cope too well with tracking statically linked dependencies. But that's just a technical problem.

                                                                                                  GitHub's vulnerability tracking has been fantastic in this regard.

                                                                                                • AshamedCaptain an hour ago

                                                                                                  Do you keep multiple branches of your programs, including one where you do not add new features but only bugfixes and such security updates?

                                                                                                  (And I am skeptical of claims that leaf developers can keep up with the traffic of security updates)

                                                                                                  • sunshowers an hour ago

                                                                                                    I build my programs to be append-only, such that users can always update to new versions with confidence.

                                                                                                    For example, I'm the primary author and maintainer of cargo-nextest [1], which is a popular alternative test runner for Rust. Through its history it has had just one regression.

                                                                                                    If I did ever release a new major version of nextest, I would definitely keep the old branch going for a while, and make noises about it going out of support within the next X months.

                                                                                                    Security updates aren't that common, at least for Rust. I get maybe 5-6 alerts a year total, and maybe 1-2 that are actually relevant.

                                                                                                    [1] https://nexte.st/

                                                                                                    • AshamedCaptain an hour ago

                                                                                                      > I build my programs to be append-only, such that users can always update to new versions with confidence.

                                                                                                      And in this wonderful world where developers are competent enough to manage this, and therefore there are no issues when libraries are updated (append-only, right?).... why do you have a problem with shared linking again? Or is this a case where you think yourself as an "above average" programmer?

                                                                                                      • sunshowers an hour ago

                                                                                                        I think the difference is that library interfaces tend to be vastly more complex than application interfaces, partly because processes form fairly natural failure domains (if my application does something wrong it exits with a non-zero code, but if my library does something wrong my entire program is suddenly corrupted.)

                                                                                                        There are also significant benefits to static linking (such as inlining and LTO) that are not relevant across process boundaries.

                                                                                                        But yes, in a sense I'm pushing the problem up the stack a bit.

                                                                                                        > is this a case where you think of yourself as an "above average" programmer?

                                                                                                        I've been very lucky in life to learn from some of the best minds in the industry.

                                                                                              • AshamedCaptain an hour ago

                                                                                                We have such a myriad of "dependency bundling technologies", dating back more than a decade now, and the situation has only been made worse.

                                                                                                It's way too comfortable for _developers_ to bundle dependencies. That already explains why there is pressure to do so. You yourself look at this through developer glasses. I think users couldn't care less, or may even actively avoid dependency bundling. Because my impression, as a user, is that not only do these bundles almost never work right, they actually make compatibility _harder_, not easier. And they decrease desktop environment integration, they increase overhead in every metric, they make patching things harder, etc. etc. Can you find other reasons why all these technologies you mention are not taking off at all with desktop Linux users?

                                                                                                And speaking as a developer, the software I develop is usually packaged by distros (and not myself), so I'm very well aware of the "sweating" involved. And despite that, I will say: it is not as bad as the alternatives presented.

                                                                                            • zajio1am an hour ago

                                                                                              The main argument for shared libraries is not space reduction but uniformity. I do not want ten different versions of GTK, Freetype, Guile or Lua in my OS, each with its own idiosyncrasies or bugs.

                                                                                              • shams93 3 hours ago

                                                                                                That's Fedora; it's a bleeding-edge, experimental distro, at least when it comes to the desktop. Ubuntu has become really popular because it is stable and reliable. My friends who use Linux to perform live electronic music all use Ubuntu Studio and have been for over 15 years now.

                                                                                                • yesco 3 hours ago

                                                                                                  While Ubuntu is a great distro for getting things up and running, a lot of their decision making around snap has begun to make me hesitant to recommend it to people. Setting aside the political/open-source angle regarding how they handle the servers for snap, my main issue is primarily stability: I'll install something with apt and, seemingly at random, it won't give me an apt package but a non-standard snap package that becomes difficult to troubleshoot.

                                                                                                  In the case of Firefox, the snap is basically less stable than a nightly build considering how often it crashes, and it subtly breaks screen sharing in weird, non-obvious ways. This experience has me guessing that Canonical probably cares more about server Ubuntu now than it does about desktop Ubuntu, which is a real shame.

                                                                                                  While there are workarounds, I specifically endorsed Ubuntu in the first place to many people because these kinds of workarounds used to not be necessary. It's a real bummer honestly, not sure what else to recommend in this category either.

                                                                                                  • shams93 3 hours ago

                                                                                                    However, these days it's much easier to run a Linux desktop on a system that comes with it pre-installed. Some of these laptops have driver issues: they can work, but then run into things like thermal and display problems due to closed code with very complex, non-standard low-level drivers.

                                                                                                  • jeltz 3 hours ago

                                                                                                    Seems the author should just stop using an experimental, bleeding-edge distro like Fedora and go back to, for example, Debian Stable.

                                                                                                    • cherryteastain 3 hours ago

                                                                                                      AppImage does not always solve these dependency issues. I've had AppImages refuse to run because of e.g. missing Qt libraries (at least on Debian + Gnome). Flatpak and Snap are much better solutions for this problem.

                                                                                                      As for the Nvidia issues, especially the "system refuses to boot" kind, that's on Nvidia.

                                                                                                      • amlib 2 hours ago

                                                                                                        As soon as Nvidia was mentioned in the article, a chill went down my spine reminding me of my treacherous experience trying to use Ubuntu with the Nvidia drivers back in 2007. Every second reboot the drivers would break and I would wind up having to reinstall them through a VT. Re-installing the system multiple times didn't matter, following multiple different guides didn't matter, following Nvidia's own instructions to the letter on a fresh system... didn't matter. Those drivers on Arch Linux would also constantly break, but at least that was a result of the system updating and not just rebooting. It took many years, but I've been Nvidia-free for 6 years now and my system couldn't be more stable.

                                                                                                        I haven't seen Fedora break in the last 2 years I've been using it, aside from a beta-release upgrade that I was curious to test and that went wrong. I really think it's silly to put all the blame on shared libraries when there is a 99% chance it's the Nvidia drivers fucking up again.

                                                                                                      • cogman10 3 hours ago

                                                                                                        Perhaps this is a hard/impossible problem to solve, but I feel like the issue isn't so much the shared libraries, it's the fact that you need different versions of a shared library on a system for it to function. As a result, the interface for communicating that "I have version 1, 2, 3" has basically just been linking against filenames.

                                                                                                          But here's the part that feels wasteful, the part I wish could be solved: from version 1.0 to 2.0, probably 90% of most libraries is completely unchanged, yet we still duplicate everything just to solve the versioning problem.

                                                                                                          What if, instead of having a .so per version, we packaged together a computable shared object and had the compiler/linking system incorporate versions when making a link? From there, requests for versions could become something like "Hey, I need 1.2.3", and the linking system would answer "1.2.3 consists of these chunks from the shared repository". That manifest could be cached by the OS.

                                                                                                          For example, imagine fooLib has functions foo, bar, baz in version 1.2.3 and foo, bar, baz, blat in 1.4.3. You could compute a small hash of each of fooLib's functions and store those hashes in a key-value store. From there you could materialize fooLib 1.2.3 at runtime when a program requests it.

                                                                                                          New versions would essentially just be the process of sending down the new function chunks, with no overwriting of existing versions. But it would also give OS maintainers a route to say "Actually, anything that requests version 1.2.3 will get 1.2.3.1-patched because of a CVE". Or you could even hot-patch the same function across all versions in the case of a CVE, giving a more targeted patching system.

                                                                                                        I've often wondered about if we could do a really granular dependency graph like this. Mainly because I like the idea of only shipping out the smaller changes and not 1gb of stuff because of what might break.
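
                                                                                                          A rough sketch of that idea in Python, just to make it concrete (the names ChunkStore/publish/materialize are invented here, and real concerns like relocations, ABI and symbol binding are ignored):

                                                                                                            import hashlib

                                                                                                            class ChunkStore:
                                                                                                                def __init__(self):
                                                                                                                    self.chunks = {}     # content hash -> function body (bytes)
                                                                                                                    self.manifests = {}  # (library, version) -> {symbol: content hash}

                                                                                                                def publish(self, lib, version, functions):
                                                                                                                    # A release is just a manifest of per-function hashes; identical
                                                                                                                    # function bodies are stored once and shared across versions.
                                                                                                                    manifest = {}
                                                                                                                    for symbol, body in functions.items():
                                                                                                                        digest = hashlib.sha256(body).hexdigest()
                                                                                                                        self.chunks.setdefault(digest, body)
                                                                                                                        manifest[symbol] = digest
                                                                                                                    self.manifests[(lib, version)] = manifest

                                                                                                                def materialize(self, lib, version):
                                                                                                                    # "1.2.3 consists of these chunks": resolve the manifest back
                                                                                                                    # into concrete function bodies on demand.
                                                                                                                    manifest = self.manifests[(lib, version)]
                                                                                                                    return {sym: self.chunks[h] for sym, h in manifest.items()}

                                                                                                            store = ChunkStore()
                                                                                                            store.publish("fooLib", "1.2.3", {"foo": b"foo-v1", "bar": b"bar-v1", "baz": b"baz-v1"})
                                                                                                            store.publish("fooLib", "1.4.3", {"foo": b"foo-v1", "bar": b"bar-v1", "baz": b"baz-v1", "blat": b"blat-v1"})
                                                                                                            # Shipping 1.4.3 to a machine that already has 1.2.3 only needs to
                                                                                                            # transfer the chunks it doesn't have yet (here: just "blat").

                                                                                                          A CVE fix would then amount to publishing a patched chunk and repointing the affected manifests at it, which matches the targeted patching route described above.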

                                                                                                        • dannyobrien 3 hours ago

                                                                                                          I wonder what the author thinks of Nix/Guix-type distributions? Seems like that's something that gets the best of both worlds, with a minimum (but not non-zero) amount of futzing around.

                                                                                                          • zrm 2 hours ago

                                                                                                            The old reason for shared libraries was to share memory/cache and not waste disk space. That's not gone, but maybe it's less important than it used to be.

                                                                                                            The modern reason is maintenance. If 100 apps are using libfoo, in practice nobody is going to maintain a hundred separate versions of libfoo. That means your choices are a) have hundreds of broken versions nobody is maintaining spread all over, or b) maintain a small number of major releases forked at the point where compatibility breaks, so that every version of 1.x.x is compatible with the latest version of 1.x.x and every version of 2.x.x is compatible with the latest version of 2.x.x, and somebody is maintaining a recent version of each series, so you can include it as a shared library that everything else links against.

                                                                                                            But then you need libraries to limit compatibility-breaking changes to once or twice a decade.
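
                                                                                                              This is essentially what the soname convention already encodes: programs record a dependency on a major-version name, and any maintained release in that series satisfies it. A small illustration via Python's ctypes (libz is just a convenient example; the exact patch version on a given system will differ):

                                                                                                                import ctypes

                                                                                                                # Programs depend on the major series ("libz.so.1"),
                                                                                                                # not on a specific release like 1.2.13 or 1.3.1.
                                                                                                                z = ctypes.CDLL("libz.so.1")

                                                                                                                # Whichever 1.x.x the distro currently ships answers for that name.
                                                                                                                z.zlibVersion.restype = ctypes.c_char_p
                                                                                                                print(z.zlibVersion())  # e.g. b'1.2.13' or b'1.3.1', depending on the system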

                                                                                                            • nektro 11 minutes ago

                                                                                                              sounds like OP would love nixos

                                                                                                              • ramon156 2 hours ago

                                                                                                                > This is why I am a massive proponent of AppImages

                                                                                                                  I joined Linux late-game, and only recently discovered that AppImages were in some cases much nicer to work with. The thing that I was missing, though, was... a package manager. If there was any distro that built its package manager around AppImages, I would gladly use it.

                                                                                                                • idle_zealot 2 hours ago

                                                                                                                    What would that even mean? A package manager is for tracking and installing dependencies and putting things in the right place in the filesystem. AppImages bundle their dependencies and don't expect to unpack into a filesystem; you just put them in an /apps directory. Do you just mean that you want a repository of AppImages to search? Otherwise your "package manager" is 'curl $APPIMAGE_URL > ~/apps/$APP_NAME' and 'rm ~/apps/$APP_NAME'.
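
                                                                                                                    To make that point concrete, here is a minimal sketch in Python of what such a tool could look like; the catalog, the example.org URL and the ~/apps layout are all made up for illustration:

                                                                                                                      import os
                                                                                                                      import stat
                                                                                                                      import sys
                                                                                                                      import urllib.request

                                                                                                                      APP_DIR = os.path.expanduser("~/apps")

                                                                                                                      # Hypothetical catalog; a real tool would query a remote index instead.
                                                                                                                      CATALOG = {
                                                                                                                          "some-editor": "https://example.org/some-editor.AppImage",
                                                                                                                      }

                                                                                                                      def install(name):
                                                                                                                          # The "curl" step: download the AppImage and mark it executable.
                                                                                                                          os.makedirs(APP_DIR, exist_ok=True)
                                                                                                                          dest = os.path.join(APP_DIR, name + ".AppImage")
                                                                                                                          urllib.request.urlretrieve(CATALOG[name], dest)
                                                                                                                          os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)

                                                                                                                      def remove(name):
                                                                                                                          # The "rm" step: no dependency tracking, nothing else to clean up.
                                                                                                                          os.remove(os.path.join(APP_DIR, name + ".AppImage"))

                                                                                                                      if __name__ == "__main__":
                                                                                                                          action, name = sys.argv[1], sys.argv[2]
                                                                                                                          {"install": install, "remove": remove}[action](name)

                                                                                                                    Which rather makes the point: beyond search and update checks, there isn't much left for a "package manager" to do here.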

                                                                                                                • enriquto 3 hours ago

                                                                                                                  The world needs a static linux distribution more than ever.

                                                                                                                  • ChocolateGod 3 hours ago

                                                                                                                        I can't imagine how much worse Heartbleed would have been if everything had been statically compiled.

                                                                                                                    • Dwedit 3 hours ago

                                                                                                                      How about the part where Shared Objects take a performance penalty due to being position-independent code?

                                                                                                                      • kelnos 33 minutes ago

                                                                                                                        But they also can reduce memory pressure and cache misses, so maybe that evens out.

                                                                                                                      • DemocracyFTW2 3 hours ago

                                                                                                                        > the issue with "libFlac.8.so is missing" and I have version 12 installed

                                                                                                                              I believe this is one of the core issues here, and it is nicely illustrated by comparing what e.g. one Isaac Schlueter (isaacs of npm fame) thinks about dependencies with the critique of this offered by Rich Hickey (of Clojure fame).

                                                                                                                        Basically what Isaac insists on is that Semantic Versioning can Save Us from dependency hell if we just apply it diligently. The advancement that npm offers in this regard is that different transitive dependencies that refer to different versions of the same module can co-exist in the dependency tree, which is great.

                                                                                                                        But sometimes, just sometimes folks, you need to have two versions of the same dependency for the same module, and this has taken a lot of effort to get into the system, because of the stubborn insistence that somehow `foo@4.1` and `foo@4.3` should be the 'same only different', and that really it makes no sense to use both `foo@3` and `foo@4` from the same piece of code, because they're just two versions of the 'same'.

                                                                                                                              Rich Hickey[1] cuts through this and asserts that, no, if there's a single bit of difference between foo version A and foo version B, then they're—different. In fact, both pieces of software can behave in arbitrarily different ways. In the real world, most of the time they don't, it's true, but also in the real world, the one thing I can be really sure of is this: if foo version A is not bit-identical to foo version B, then those are different pieces of software, potentially (and likely) with different behaviors. Where those differences lie, and whether they will impact my particular use of that software, remains largely a matter of conjecture.

                                                                                                                              Which brings me back to the OP's remark about libFlac.8.so conflicting with libFlac.12.so. I think they shouldn't conflict. I think we have to wean ourselves off the somewhat magical thinking that we just need an agreement on what is a breaking change and what is a 'patch', so that we can go on pretending we can share libraries system-wide on a 'first-name basis' as it were, i.e. disregarding their version numbers.

                                                                                                                              I feel I do not understand Linux deeply enough, but my suspicion has been for years now that we don't have to abolish shared libraries altogether if only we would stop seeing anything in libFlac.8.so that ties it particularly closely to libFlac.12.so. There are probably a lot of commonalities between the two, but in principle there need not be any, and therefore the two libraries should be treated like any two wholly independent pieces of software.

                                                                                                                        [1] https://youtu.be/oyLBGkS5ICk?list=PLZdCLR02grLrEwKaZv-5QbUzK...
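
                                                                                                                              As a hedged illustration of treating the two as unrelated software: on Linux, differently-versioned sonames are separate files and can even be loaded side by side from one process via ctypes. The sonames below are only illustrative, whether both are installed depends on the distro, and heavily entangled libraries can still clash through their own dependencies:

                                                                                                                                import ctypes

                                                                                                                                # Each soname is its own file under its own name; nothing about
                                                                                                                                # libFLAC.so.8 is obliged to resemble libFLAC.so.12.
                                                                                                                                old = ctypes.CDLL("libFLAC.so.8")   # resolves only if this soname is installed
                                                                                                                                new = ctypes.CDLL("libFLAC.so.12")  # a different file, effectively different software

                                                                                                                                # ctypes opens libraries with RTLD_LOCAL by default on Linux, so each
                                                                                                                                # handle resolves its symbols independently of the other.
                                                                                                                                print(old, new)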

                                                                                                                        • EdwardDiego 4 hours ago

                                                                                                                          Fedora is a "move fast maybe sometimes break things" distro, e.g., early adoption of Btrfs as the default, Wayland etc.

                                                                                                                          So yeah, these things sometimes happen.

                                                                                                                          • andrewstuart 3 hours ago

                                                                                                                                  If “gnu” gets headline billing for providing some userland tools, then really “systemd/Linux” should be the current operating system title, because systemd pervades the distros mentioned in this article. In many ways systemd IS the operating system outside the kernel.

                                                                                                                            • johnea 3 hours ago

                                                                                                                                    And in fact systemd/Linux is what I consider the biggest digression in Linux over the last decade.

                                                                                                                              It pushes for a single upstream for all of userland. With IBM being that single upstream.

                                                                                                                                    After decades, I'll leave Arch Linux in my next migration because of this.

                                                                                                                              • sho_hn 3 hours ago

                                                                                                                                Neither of the systemd lead maintainers works for IBM (or the same company).

                                                                                                                                • ChocolateGod 3 hours ago

                                                                                                                                  > It pushes for a single upstream for all of userland. With IBM being that single upstream.

                                                                                                                                  I hate to break it to you but what do you think the GNU Project is?

                                                                                                                                  • pessimizer 2 hours ago

                                                                                                                                          It's astounding how easily people slipped from defending Red Hat owning so much of Linux with their giant labyrinthine subsystems to literally defending IBM ownership. When Google or Facebook buys IBM, they'll be calling everyone purists for even mentioning it.

                                                                                                                                • senzilla 3 hours ago

                                                                                                                                  The abysmal state of GNU/Linux is exactly why I moved to OpenBSD many years back. It's small, simple and very stable.

                                                                                                                                  The BSDs are definitely not for everyone, and they come with their own set of tradeoffs. However, it is safe to say that all BSDs are better today than 10 years ago. Small and steady improvements over time.

                                                                                                                                  • einpoklum 3 hours ago

                                                                                                                                    > I am all in for AppImages or something like that

                                                                                                                                    WTF? On the contrary!

                                                                                                                                    > And Snaps and Flatpaks tried to solve some of these things,

                                                                                                                                    Made things worse.

                                                                                                                                    > I don't care if these images are 10x bigger.

                                                                                                                                              ... the size is just part of the problem. The duplication is another part. A system depending on 100K different versions of libraries/utils instead of, oh, say, 5K. And there's the memory usage, as others mentioned.

                                                                                                                                              Also, there really isn't more trouble locating shared libraries today than 10 years ago; if anything, the opposite is true. Not to mention that there are even more searchable "crowd support" resources today than back then, for when you actually do have such issues.

                                                                                                                                    So...

                                                                                                                                    > How come GNU/Linux is worse than it was 10 years ago?

                                                                                                                                    I think it's actually better overall:

                                                                                                                                              * Fewer gotchas during installation

                                                                                                                                    * Better apps for users' basic needs (e.g. LibreOffice)

                                                                                                                                    * Less chance of newer hardware not being supported on Linux (it still happens though)

                                                                                                                                    but if you asked me what is worse, then:

                                                                                                                                    1. systemd.

                                                                                                                                    2. Further deterioration of the GNOME UI. Although TBH a lot of that sucked 10 years ago as well (e.g. the file picker)

                                                                                                                                    3. Containerization instead of developers/distributors having their act together

                                                                                                                                    but certainly not what the author is trying to push. (shrug)

                                                                                                                                    • ChocolateGod 3 hours ago

                                                                                                                                      > * Less chance of newer hardware not being supported on Linux (it still happens though)

                                                                                                                                                This is one area where I think Linux is really bad compared to Windows. AMD makes a new graphics card, pushes out a driver update for Windows, and it's all golden.

                                                                                                                                      On Linux? They have to spend months getting it into the kernel release cycle, then wait on distributions to test that update, trickle it down to users and if you're on some kind of LTS distribution you might as well not bother.

                                                                                                                                      • trelane an hour ago

                                                                                                                                                  It's not like they only start developing the driver when the card is released. On either OS, they work with the OS vendor for months or years beforehand to develop the driver. One often sees this in, e.g., Intel drivers landing in the Linux kernel well before the hardware is available to consumers.

                                                                                                                                                  You're not entirely wrong, though; it's more of a concern with LTS support. And even that is reduced somewhat because most devices don't actually need custom drivers, thanks to existing standards (e.g. HID) and/or userspace drivers.

                                                                                                                                        And, of course, you don't have it at all if you only buy hardware with Linux pre-installed and supported by the vendor. You know, like you do with Windows and Mac.

                                                                                                                                    • marssaxman 4 hours ago

                                                                                                                                      > Shared dependencies were a mistake!

                                                                                                                                      Couldn't agree more.

                                                                                                                                      • fullspectrumdev 3 hours ago

                                                                                                                                        It strikes me that Linux seems to have basically reinvented DLL Hell.

                                                                                                                                        • bachmeier 2 hours ago

                                                                                                                                          Not really. This has always been possible, and by definition, it has to be possible. If you use a stable distro and stick with your distro's repositories it's not a problem. If you want to install stuff outside the repos and you aren't willing to compile it yourself, it's absolutely going to be a problem.