• tharne 5 hours ago

    This is something I've never totally understood when it comes to Rust's much loved memory safety vs. C's lack of memory safety. When it comes to replacing C code with Rust, aren't we just trading memory risk for supply chain risk?

    Maybe one is more important than the other, I don't know. All the languages I use for work or hobbies are garbage collected and I'm not a security professional. But it does seem like the typical Rust program, with its massive number of "cargo adds", is an enormous attack surface.

    • bluGill 4 hours ago

      Supply chain attacks have always existed. Because C has no package manager, they were slightly harder there: a dependency wouldn't be automatically updated, while in Rust it can be. But the difference is very slight - on Linux, many people use libraries from a distro package manager that updates them whenever there's a new release, and it wouldn't be hard to get a bad update into a package (xz did exactly that).

      If you have packages that don't come from a package manager - Windows installers, phone installs, Snap, Docker, Flatpak, and likely more - you have a different risk: a bundled library may not have been updated, so you're vulnerable to a known flaw.

      There is no good/easy answer to supply chain risk. It's slightly different in Rust because you can take the latest if you want (though there's plenty of ability to stay on an older release), but it doesn't move the needle on overall risk.

      • ploxiln 21 minutes ago

        > it wouldn't be hard to get a bad update into a package (xz did that)

        I'd actually call that quite difficult. In the case of xz it was a quite high-effort "long con" the likes of which we've never seen before, and it didn't quite succeed in the end (it was caught before rolling out to stable distros and did not successfully exploit any target). One huge close call, but so far zero successes, over almost 30 years now.

        But typo-squatting and hijacked packages in NPM and PyPI? We've seen those hundreds of times, many of them successfully attacking developers at important software companies or simply siphoning cryptocurrency.

        • semi-extrinsic 2 hours ago

          Au contraire, the Rust (or other "modern" lang) dependencies come in addition to the OS dependencies. The C (or other "old" lang) programs typically have very few dependencies apart from the OS, with absolutely glacial release cycles. And unless you're on Arch or similar, the OS package manager updates are primarily just minor version bumps.

          It seems pretty indisputable that "modern" langs substantially increase your supply chain attack surface. Of course some (like JS) are worse than others.

          As a result, whether the net security benefit of using Rust vs C is positive or negative depends heavily on the program in question. There is a huge difference between e.g. Firefox and Wireguard in this respect.

          • bluGill 2 hours ago

            Anyone writing C quickly learns to find third party libraries that do lots of things for them.

            • antonvs 36 minutes ago

              > The C (or other "old" lang) programs typically have very few dependencies

              Say what now? Have you ever worked on a project that uses C?

              We were using 3rd party dependencies in C in the 1980s.

              Here's a more current list for C and C++: https://github.com/fffaraz/awesome-cpp

          • rpcope1 2 hours ago

            That same problem bites in other ways too. There was a discussion, I think on the Debian mailing lists, about Rust applications potentially being slower to patch because everything gets linked statically (so you can't just patch libssl and ship a new .so). I imagine one compromised dependency now means pulling a new version of every single package that incorporated it, which realistically feels like AAA-game-asset-sized updates weekly.

            • MattPalmer1086 4 hours ago

              It's rare not to use open source libraries no matter the language. Maybe C code tends to use fewer, I don't know.

              This doesn't prove anything of course, but the only High severity vulnerability I had in production this year was a C library. And the vulnerability was a buffer overflow caused by lack of memory safety.

              So I don't think it's a simple trade off of one sort of vuln for another. Memory safety is extremely important for security. Supply chain attacks also - but using C won't defend you from those necessarily.

              • jacquesm 2 hours ago

                It is extremely rare for a C build environment to start downloading a massive number of unaudited dependencies from a poorly organized pile of endless layers of repositories. What you might have is a couple of dependencies and then you build the rest yourself. To have 800 unknown co-authors on your 'hello world' app would not happen in C.

                There are of course still other vectors for supply chain attacks. The toolchain itself, for instance. But then you fairly quickly get into 'trusting trust' level issues (which are very real!) and you will want an OS that has been built with known clean tools as well.

                • acdha 2 hours ago

                  The flip side is that the median C program has more first-party security bugs, and likely has third-party code vendored in as copies, which are harder to detect and replace. I remember years ago finding that a developer had copied something like a DES implementation but modified it, so you had to figure out what they'd customized as part of replacing it.

                  • jacquesm 2 hours ago

                    So far I have not found this to be the case. Usually stuff is fairly high quality and works for the use cases that I throw at it. Your example sounds like very risky behavior. That stuff is super hard to get exactly right.

                • immibis 3 hours ago

                  There's no canonical package manager or packaging convention for C and C++ libraries, since they predate that sort of thing. As a result, there's a lot more friction to using dependencies and people tend to use fewer of them. Common OS libraries are fair game, and there are some large widely used libraries like Boost, but it's extremely unusual for a C or C++ project to pull in 20+ very small libraries. A chunk of functionality has to be quite big and useful before it overcomes the friction of making it a library.

              • udev4096 5 hours ago

                Instead of securing the "chain", we should isolate every library we import and run it in a sandbox. We should adopt the model of QubesOS, which follows security by isolation. There are lots of native sandboxing mechanisms around the Linux kernel: Bubblewrap, Landlock, gVisor and Kata (containers, not native), microVMs, namespaces (user, network), etc.

                • zzo38computer 15 minutes ago

                  It might work if you run the library in a separate process. This will work better on a capability-based system (with proxy capabilities; this would have other benefits as well and not only security) than on Linux, although it might still be possible to implement. An alternative way to implement library sandboxing would be to implement the sandboxing at compile time, which would work on any system and not require a separate process, and has both advantages and disadvantages compared with the other way.

                  • whytevuhuni 4 hours ago

                    I don't know what the next programming language after Rust will look like, but it will definitely have built-in effects and capabilities.

                    It won't fix everything (see TARmageddon), but left-pad-rs's build.rs file should definitely not be installing a sudo alias in my .bashrc file that steals my password when I cargo build my project.

                    • darrenf 3 hours ago

                      Can't help but think that Perl's tainted mode (which is > 30yrs old) had the right idea, and it's a bit strange how few other languages wanted to follow its example. Quoting `perldoc perlsec`:

                      > You may not use data derived from outside your program to affect something else outside your program--at least, not by accident. All command line arguments, environment variables, locale information (see perllocale), results of certain system calls ("readdir()", "readlink()", the variable of "shmread()", the messages returned by "msgrcv()", the password, gcos and shell fields returned by the "getpwxxx()" calls), and all file input are marked as "tainted". Tainted data may not be used directly or indirectly in any command that invokes a sub-shell, nor in any command that modifies files, directories, or processes, with the following exceptions: [...]
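A taint-like discipline can be approximated in Rust with a newtype that keeps external input locked up until it's validated - this is just a sketch of the idea (all names invented), not Perl's actual runtime tracking:

```rust
// Sketch: a taint-style newtype. External input is wrapped in
// Tainted<T> and can't be used until it passes a validator.
struct Tainted<T>(T);

impl Tainted<String> {
    fn new(raw: String) -> Self {
        Tainted(raw)
    }

    // The only way out: run a validator. If it rejects the value,
    // the caller never gets the raw String back.
    fn validate<F: Fn(&str) -> bool>(self, check: F) -> Option<String> {
        if check(&self.0) { Some(self.0) } else { None }
    }
}

fn main() {
    // In a real module the inner field would be private, so a shell
    // invocation couldn't accidentally take the raw value.
    let input = Tainted::new("filename.txt".to_string());
    let safe = input.validate(|s| !s.contains('/') && !s.contains(';'));
    assert_eq!(safe, Some("filename.txt".to_string()));
}
```

Unlike Perl's taint mode this is compile-time-only convention, but it pushes the "validate before use" step into the type system.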

                      • bluGill 4 hours ago

                        I hope you are right, but I fear there is no way to make such a thing usable. You likely end up with complex permissions that nobody understands, so everyone just "accepts all", or with programs whose legitimate actions need the same permissions as the evil thing you want to block.

                        • marcosdumay 3 hours ago

                          > but fear that there is no way to make such a thing that is usable

                          A function's declaration declares every action it can take on your system, and any change that adds new ones is a breaking change for the library.

                          We've known how to do this for ages. What we don't have is a good abstraction that lets the compiler check the declarations and translate low-level actions into higher-level ones as they move up the stack.
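As a rough illustration of what "the declaration declares every action" could look like in Rust today, here's a sketch using plain capability-token parameters (all names invented; this is a convention, not a compiler-checked effect system):

```rust
// Capability tokens: holding one is "permission" to do that class
// of action. A function's signature then advertises what it touches.
struct FsCap;   // stands for filesystem access
struct NetCap;  // stands for network access

// Signature says: this function may touch the filesystem, nothing else.
fn load_config(_fs: &FsCap, path: &str) -> String {
    format!("config loaded from {path}") // stand-in for real file I/O
}

// No capability parameters: a caller can see it performs no I/O at all.
fn parse_config(raw: &str) -> usize {
    raw.len()
}

fn main() {
    // main() starts out holding the "do everything" tokens.
    let fs = FsCap;
    let raw = load_config(&fs, "app.toml");
    assert_eq!(parse_config(&raw), raw.len());
}
```

Adding a `NetCap` parameter to `load_config` later would be exactly the breaking change described above: every caller up the chain has to be updated to supply it.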

                          • bluGill 2 hours ago

                            You end up just declaring that you can do anything, because every time a function legitimately needs a new capability, every function up the call chain has to be updated to declare it too (even if in practice it can never happen - the compiler can't prove that).

                            Like you say we don't have a good abstraction for this.

                            • marcosdumay 2 hours ago

                              > every time a function legitimately needs to do something

                              Hopefully, you know exactly what a function needs to do when you write it.

                              > every other function up the chain needs to be updated

                              There's type reuse and composition to deal with that. Many languages with advanced type systems do composition badly, but it's still there.

                      • criemen 3 hours ago

                        > we should instead isolate every library we import and run it under a sandbox

                        I don't see how that'd be possible. Often we want the library to do useful things for the application, in the context of the application. What would incentivize developers to specify more fine-grained permissions per library than the union of everything their application requires?

                        I see more use in sandboxing entire applications and giving them more selective access than "the entire user account" like we do these days. This is maybe more how smartphone operating systems work than desktop ones?

                        • immibis 3 hours ago

                          In languages without ambient I/O capabilities, it's not as hard as it sounds if you're used to languages with them. Suppose the only way you can write a file is if I give you a handle to that file - then I know you aren't going to write any other files. Of course, main() receives a handle from the OS to do everything.

                          If I want you to decode a JPEG, I pass you an input stream handle and you return an output memory buffer; because I didn't give you any other capabilities I know you can't do anything else. Apart from looping forever, presumably.

                          It still requires substantial discipline because the easiest way to write anything in this hypothetical language is to pass the do-everything handle to every function.

                          See also the WUFFS project: https://github.com/google/wuffs - where things like I/O simply do not exist in the language, and therefore, any WUFFS library is trustworthy. However, it's not a general-purpose language - it's designed for file format parsers only.
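The handle-passing idea can be sketched in today's Rust, with the caveat the comment already raises: Rust's std is ambiently available, so this is discipline by convention, not enforcement (the decoding logic here is a made-up stand-in):

```rust
use std::io::Read;

// The decoder only receives a reader and returns bytes. With no
// other handles passed in, by convention it has nothing else to
// act on - it can't name a file or a socket of its own.
fn decode(mut input: impl Read) -> std::io::Result<Vec<u8>> {
    let mut raw = Vec::new();
    input.read_to_end(&mut raw)?;
    // Stand-in for real decoding: just invert each byte.
    Ok(raw.iter().map(|&b| !b).collect())
}

fn main() -> std::io::Result<()> {
    // Any Read works: a file handle the caller opened, or a buffer.
    let out = decode([0x00u8, 0xFF].as_slice())?;
    assert_eq!(out, vec![0xFF, 0x00]);
    Ok(())
}
```

In a language without ambient I/O, that signature really would be the whole story; in Rust it only documents intent.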

                          • criemen 3 hours ago

                            Fair enough, it makes more sense in a, say, Haskell-style pure functional language. Instead of getting a general IO monad, you pass in more restricted functionality.

                            Still, it'd be highly painful. Would it be worth the trade-off to prevent supply chain attacks?

                        • cesarb 3 hours ago

                          > Instead of securing the "chain", we should instead isolate every library we import and run it under a sandbox.

                          Didn't we have something like that in Java more than a decade ago? IIRC, you could, for instance, restrict which classes could do things like opening a file or talking to the network.

                          It didn't quite work, and was abandoned. Turns out it's hard to sandbox a library; the exposed surface ended up being too large, and there were plenty of sandbox escapes.

                          > There are lots of native sandboxing in linux kernel. Bubblewrap, landlock, gvisor and kata (containers, not native), microVMs, namespaces (user, network), etc

                          What all of these have in common is that they isolate processes, not libraries. If you could isolate each library in a separate process without killing performance with IPC costs, you could use them; one example is desktop thumbnailers, which parse untrusted data and can use sandboxes to protect against bugs in the image and video codec libraries they use.

                          • yupyupyups 3 hours ago

                            If there is a kernel-level feature to throw sections of a process's memory into other namespaces, then yes, that may work. If you mean running a Xen hypervisor for sqlite.so, then no thanks.

                            • warkdarrior an hour ago

                              One downside of fine-grained sandboxing is the overhead, as each (function) call into a library now has to cross sandbox boundaries.

                              Even if we assume overhead is magically brought to zero, the real challenge is customizing the permission policy for each sandbox. I add, say, 5 new dependencies to my program, and now I have to review source code of each of those dependencies and determine what permissions their corresponding sandboxes get. The library that connects to a database server? Maybe it also needs filesystem access to cache things. The library that parses JSON buffers? Maybe it also needs network access to download the appropriate JSON schema on the fly. The library that processes payments? Maybe it also needs access to location information to do risk analysis.

                              Are all developers able to define the right policies for every dependency?

                              • zzo38computer 10 minutes ago

                                I think using proxy capabilities would partially mitigate this issue; effectively they would work through interfaces rather than permissions. By itself that won't be enough, though; documentation will also help, as will deciding what your specific application actually needs (which is helpful even independently of schemes like this).

                            • coolThingsFirst 2 hours ago

                              Not a computer security expert by any stretch of the imagination, but why is this difficult to solve? Don't antiviruses check whether a program is accessing tokens or something? Maybe OSes even need default protection for things like Chrome browser history and tokens.

                              • alganet 3 hours ago

                                "Not invented here syndrome" might actually not be a syndrome.

                                • teddyh 2 hours ago
                                  • thewebguyd 2 hours ago

                                    Agree. We took NIH too far.

                                    You don't need to pull in a library for every little function, that's how you open yourself up to supply chain risk.

                                    The left-pad fiasco, for example. Left-pad was 11 lines of code. Literally no reason to pull in an external dependency for that.

                                    Rust is doomed to repeat the same mistakes because it also has an incredibly minimal standard library, so now we get micro-crates for simple string utilities, or scopeguard, which itself is under ~400 LoC - a much simpler RAII guard can be written for your own project if you don't need everything scopeguard offers.
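For illustration, here's roughly what those two examples boil down to when written inline - a sketch of the ideas, not the real left-pad or scopeguard APIs:

```rust
// The left-pad idea: prepend a pad character until the string
// reaches the requested width.
fn left_pad(s: &str, width: usize, pad: char) -> String {
    let missing = width.saturating_sub(s.chars().count());
    let mut out: String = std::iter::repeat(pad).take(missing).collect();
    out.push_str(s);
    out
}

// A minimal RAII guard: runs a closure when it goes out of scope.
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    assert_eq!(left_pad("7", 3, '0'), "007");

    let mut cleaned_up = false;
    {
        let _g = Guard(|| cleaned_up = true);
    } // _g is dropped here, so the closure runs
    assert!(cleaned_up);
}
```

Neither needs error handling, configuration, or a release cycle - which is the point: for code this small, owning it is cheaper than depending on it.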

                                    The industry needs to stop being terrified of writing functionality that already exists elsewhere.

                                    • jacquesm 2 hours ago

                                      > The industry needs to stop being terrified of writing functionality that already exists elsewhere.

                                      I think that, like everything else, this is about balance. Dependencies appear to be zero-cost, whereas writing something small (even 400 lines of code) visibly costs time, so pulling in the dependency (and its dependencies, and so on) looks cheaper. The cost is still there, it's just much better hidden, and so people fall for it. If you knew the real cost you probably wouldn't pull in that dependency.