• jandrewrogers 9 hours ago

    This is an issue that is ignored by just about everyone in practice. The reality is that most developers have subconsciously internalized the compiler behavior and assume that it will always hold. And they are mostly right; I’ve only seen a few cases where this has caused a bug in real systems over my entire career. I try, to the extent possible, to always satisfy the requirements of strict aliasing when writing code. It is difficult to determine whether I’ve been successful in this endeavor.

    Here is why I don’t blame the developers: writing fast, efficient systems code that satisfies the requirements of strict aliasing as defined by C/C++ is surprisingly difficult. It has taken me years to figure out the technically correct incantations for every weird edge case such that they always satisfy the requirements of strict aliasing. The code gymnastics in some cases are entirely unreasonable. In fairness, recent versions of C++ have been adding ways to express each of these cases directly, eliminating the need to use obtuse incantations. But we still have huge old code bases that assume compiler behavior, as was the practice for decades.

    I am not here to attribute blame; honestly, I think the causes are pretty diffuse. This is just a part of the systems world we failed to do well, and it impacts the code we write every day. I see strict aliasing violations in almost every code base I look at.

    • spacechild1 2 hours ago

      > In fairness, recent versions of C++ have been adding ways to express each of these cases directly, eliminating the need to use obtuse incantations.

      In particular, C++20 gave us std::bit_cast (https://en.cppreference.com/w/cpp/numeric/bit_cast) for type punning, and C++23 added std::start_lifetime_as (https://en.cppreference.com/w/cpp/memory/start_lifetime_as) for interpreting raw bytes as an object.
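
      For example, the classic float-to-bits pun becomes a one-liner (a rough, untested sketch; float_bits is my own name for it):

        #include <bit>
        #include <cstdint>
        #include <cstdio>

        // C++20: copy the object representation of a float into a uint32_t
        // without any pointer casts, so no strict aliasing concerns.
        std::uint32_t float_bits(float f) {
            static_assert(sizeof(float) == sizeof(std::uint32_t));
            return std::bit_cast<std::uint32_t>(f);
        }

        int main() {
            std::printf("%08x\n", (unsigned)float_bits(1.0f)); // prints 3f800000
        }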

      • 082349872349872 6 hours ago

        One of the issues that worked against Euclid's adoption was that its compiler strictly disallowed aliasing. That said, https://dl.acm.org/doi/pdf/10.5555/800078.802513 claims that while they tended to write potentially-aliased code at first, after one made the Euclid compiler happy, subsequent development wasn't likely to reintroduce it.

        https://en.wikipedia.org/wiki/Euclid_(programming_language)

        • tialaramex 6 hours ago

          The insight in languages like Rust is that aliasing is actually fine if we can guarantee all the aliases are immutable, and that's facilitated by default reference immutability. These aliasing-related bugs only arise when you have mutable aliasing, which is why mutable aliasing doesn't exist in safe Rust.

          That paper also highlights that checking is crucial: their initial Euclid compiler just required that there be no aliasing, but never checked. So of course programmers made mistakes, and without the checks those mistakes leaked into running code. The finished compiler checked, which means such mistakes wouldn't even compile.

          Shifting left in this way is huge. WUFFS shifts bounds misses left: when you write code that can have a bounds miss, in C it simply does have a bounds miss at runtime, with a stray read or overwrite and the resulting chaos, maybe Remote Code Execution. In Rust the miss panics at runtime, maybe a Denial of Service or at least a major inconvenience. But in WUFFS it won't compile; you find out about your bug likely before it even gets sent out for code review.

          Most software can't be written in WUFFS, but "most" is doing a lot of work there: plenty of code that should be written in WUFFS or an analogous language is not, which means those mistakes are not shifted left.

        • gpderetta 6 hours ago

          Indeed, another problem is that we have no tools, other than very imperfect linters/compiler warnings, to identify aliasing violations. Even today I don't think sanitizers can catch most cases.

          • RossBencina 5 hours ago

            More often than not, when I realise that I am violating strict aliasing it is because I am doing something that I want to do and the language is not going to let me. Much hand-wringing, language lawyering, and time wasting typically follows.

          • Arch-TK 3 hours ago

            > The reality is that most developers have subconsciously internalized the compiler behavior and assume that will always hold.

            I blame this on how people like to teach C and present C.

            It's very important that the second anyone conceives of the idea of learning C, they are informed up front that trying things and seeing what happens is a highly unreliable way to learn how C programs behave, and that C is not a high-level assembly language.

            If you teach C in relation to the abstract machine instead of any real-world machine, you will understandably scare off most people. That is good, since most people shouldn't be learning or writing C. It's a language that can barely be written correctly even by people with the necessary self-discipline to only write code they're 100% certain is well defined.

            > It is difficult to determine if I’ve been successful in this endeavor.

            Why is your program so full of casts between pointer types that you have difficulty determining whether you've avoided strict aliasing violations?

            Yes, if you treat C as a high-level assembly language (like the Linux kernel likes to do), then it becomes difficult to reason about the behaviour of your programs, because half of them sit in a grey area where it's uncertain whether they're well defined or not.

            If you are forced to write C in a non-learning context, don't write any line of code unless you're certain you could tell someone which parts of the standard describe its behaviour.

            > Here is why I don’t blame the developers: writing fast, efficient systems code that satisfies the requirements of strict aliasing as defined by C/C++ is surprisingly difficult.

            C/C++ isn't a language, so I will stick to C; I neither know nor care about C++.

            That being said, it's not hard to write efficient C that satisfies the requirements of strict aliasing, except when you're dealing with idiotic APIs like bind or connect. Most code, assuming you use appropriate algorithms and data structures, is performant by default. The only time strict aliasing becomes difficult is when you're micro-optimizing.

            While non-trivial, the case of converting between unsigned long and float shown in the article is entirely doable with completely safe C constructs. Likewise, serialization/deserialization of binary data never requires coming close to aliasing unless you're dealing with a "native"-endian protocol. For general serialization and deserialization, compilers will reliably optimize such operations into one or two instructions (depending on whether you're decoding the same endianness or not).
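
            For example (an untested sketch; the function names are mine, and I'm using uint32_t rather than unsigned long for concreteness): memcpy for the pun, plain byte shifts for the decode. Both are fully defined and both typically compile down to a single load or store.

              #include <stdint.h>
              #include <string.h>

              /* Well-defined type pun: copy the bytes instead of casting the
                 pointer; the memcpy is optimized into a plain register move. */
              uint32_t float_to_bits(float f) {
                  uint32_t u;
                  memcpy(&u, &f, sizeof u);
                  return u;
              }

              /* Well-defined little-endian decode: no pointer casts, so no
                 alignment or aliasing concerns; typically one load (plus a
                 byte swap on big-endian targets). */
              uint32_t load_le32(const unsigned char *p) {
                  return (uint32_t)p[0]
                       | ((uint32_t)p[1] << 8)
                       | ((uint32_t)p[2] << 16)
                       | ((uint32_t)p[3] << 24);
              }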

            • gpderetta 2 hours ago

              Well, it is more complicated than that.

              First of all, compilers disagree on many interpretations and consequences of the abstract machine rules. Also, compilers have bugs.

              So a proficient C/C++ programmer does have to learn what compilers actually do in practice and what they guarantee beyond the standard (or how they differ from it).

              > C/C++ isn't a language.

              It isn't, but it is a family of languages that share a lot of syntax and semantics.

              • Arch-TK 2 hours ago

                > First of all compilers disagree on many interpretations and consequences of abstract machine rules.

                List them. I am not aware of any well-defined parts of the C standard where GCC and Clang disagree in implementation; they disagree only in areas where things are too vague (and are effectively either unspecified or undefined) or, understandably, in areas that are implementation-defined.

                If there are behaviours where a compiler deviates from the standard, it is either something you can configure (e.g. -ftrapv or -fwrapv) or it's a bug.
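
                For instance (my own toy example), signed overflow is one of those knobs:

                  /* With default flags the optimizer may assume signed overflow
                     never happens and fold this to 'return 1;'. With -fwrapv the
                     overflow is defined to wrap, so always_less(INT_MAX) must
                     return 0; with -ftrapv it traps at runtime instead. Either
                     way the behaviour follows the documented flag, not guesswork
                     about one compiler version. */
                  int always_less(int x) {
                      return x < x + 1;
                  }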

                > Also compilers have bugs.

                Nothing you do can defend against compiler bugs outside of extensively testing your results. If you determine that a compiler has a bug, then the correct course of action is definitely not "note it down and incorporate the understanding into your future programs".

                > So a proficient C/C++ programmer does have to learn what compilers actually do in practice and what they guarantee beyond the standard (or how they differ from it).

                There are situations where it's important to know what the compiler is doing. But these situations are limited to performance optimisation; the knowledge gained from them should only be applied to the single compiler version you observed it in, and you should not feed it back into your understanding of C or of the implementation.

                It's almost impossible to decipher exactly how modern C compilers work, so trying to determine what an implementation does from the results of compilation is extremely unreliable. If you need to rely on implementation-defined behaviour (unavoidable in any real program), then you should rely solely on documentation, and if the observed behaviour deviates from the documentation then that is, again, a bug.

                > It isn't, but it is a family of languages that share a lot of syntax and semantics.

                I am not a C/C++/C#/ObjectiveC/JavaScript/Java programmer.

                C++ and C might share a lot of syntax, but that's basically where the similarities end in any modern implementation. People who know C and think that's enough to write reliable and conformant C++, and people who know C++ and think that's enough to write reliable and conformant C, are among the groups who produce the most subtle mistakes in these languages.

                I think you could get away with these kinds of things in the 80s but that has definitely not been the case for quite a while.

                • AlotOfReading an hour ago

                  > List them. I am not aware of any well defined parts of the C standard where GCC and Clang disagree in implementation.

                  Perhaps it's not "well defined" enough for you, but one example I've been stamping out recently is whether compilers will combine subexpressions across expression boundaries. For example, if you have z = x * y; a = b + z; will the compiler optimize across the semicolon to produce an fma? GCC does it aggressively, while Clang broadly will not (though it can happen in the LLVM backend).
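
                  A stripped-down version of the pattern (my sketch; the outcome also depends on -ffp-contract and on whether the target has FMA instructions at all):

                    // Compile with e.g. -O2 on an FMA-capable target (say -march=haswell).
                    // GCC by default allows FP contraction across statements, so it will
                    // typically fuse the multiply and the add into a single fma here;
                    // Clang's front end generally contracts only within one expression,
                    // so it usually emits a separate multiply and add (unless the LLVM
                    // backend steps in).
                    double mul_then_add(double x, double y, double b) {
                        double z = x * y;
                        double a = b + z;
                        return a;
                    }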

          • icedchai an hour ago

            I learned C on the Amiga, back in the late 80's, and the OS made heavy use of "OO-ish" physical subtyping with structs everywhere. I don't think anybody even thought about strict aliasing violations.

            • flohofwoe an hour ago

              The Amiga C compilers most likely didn't do a lot of optimizations where strict aliasing would matter, though (at least from what I remember it was pretty straightforward: a memory read or write in C typically resulted in a memory read or write in assembly).

              Basically, C code compiled to assembly in the Amiga era looked much more straightforward than the output produced by modern C compilers (with optimizations enabled, at least); you could put both side by side and see a near 1:1 relationship between the C code and the assembly code (maybe also because the Motorola 68000 seems to have taken a lot of inspiration from the PDP instruction set).

              • ajross an hour ago

                Compilers in the 1980s really weren't sophisticated enough to have this problem. A function call was a hard barrier that was going to spill all GPRs, and inlining was almost unheard of. What the code did was what you saw, and if you had an aliased pointer it was because that's what you wanted.

                And when it became an issue c. the late 90s, it was actually "NO strict aliasing" that was the point of contention. Optimizers were suddenly able to do all sorts of magic, and compiler authors realized they were getting tripped up by the inability (c.f. the halting problem) to know for sure that some arbitrary pointer wasn't scribbling over the memory contents they were trying to optimize. You'd get better (often much better) code with -fstrict-aliasing, which was tempting enough to turn it on and hope for better analysis tools to come along and save us from the resulting bugs.

                We're still waiting, alas.

              • sapiogram 3 hours ago

                Has anyone measured the performance impact of the -fno-strict-aliasing flag? How much real-world performance are we really gaining from all this mess?

                • sestep 12 minutes ago

                  Not sure if this is exactly the same scope as what you're asking about, but here's an ESSE '21 paper titled "The Impact of Undefined Behavior on Compiler Optimization": https://doi.org/10.1145/3501774.3501781

                  • twoodfin 2 hours ago

                    Obviously it’s going to vary from program to program. And you always have to be skeptical that removing the safety for performance hasn’t given you a faster but faulty program.

                    That being said, my intuition matches what little anecdotal data I’ve seen from real perf-sensitive systems, and I’d ballpark 10-15% where it matters.

                    • gpderetta 2 hours ago

                      Depends a lot on the application. For many it matters little, but for some (mostly numerical), it can matter a lot.

                      • 73kl4453dz 2 hours ago

                        Lack of aliasing was historically Fortran's advantage over C.

                        • lmm 2 hours ago

                          Real-world performance: not enough to be measurable, and certainly not remotely enough to make up for the time we lose to debugging.

                          But no one cares about real-world performance; people pick C, and pick a C compiler, because they want the thing that's fastest on artificial microbenchmarks.

                        • Gabriel54 3 hours ago

                          Forgive me my ignorance, but if I write

                            int foo(int *x) {
                              *x = 0;
                              // wait until another thread writes to *x
                              return *x;
                            }

                          Can the C compiler really optimize foo to always return 0? That seems extremely unintuitive to me.

                          • fweimer 3 hours ago

                            How do you accomplish the waiting operation? If it does not synchronize with the other thread, the compiler will optimize away the load. This isn't too surprising once you assume that not every *x in the source code will result in a memory access instruction. I would even say that most C programmers expect such basic optimizations to happen, although they might not always like the consequences.
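
                            For instance (an untested sketch, names are mine): with a real acquire/release handshake the compiler can no longer assume the value is still 0.

                              #include <atomic>

                              std::atomic<bool> ready{false};
                              int data = 0;   // plain int, written by the other thread before 'ready' is released

                              int reader() {
                                  // The acquire load pairs with the writer's release store, so the
                                  // compiler cannot cache 'data' across the loop or assume it is still 0.
                                  while (!ready.load(std::memory_order_acquire)) { /* spin */ }
                                  return data;   // observes 42, with no data race
                              }

                              // Writer thread (for illustration):
                              //   data = 42;                                    // happens-before the release
                              //   ready.store(true, std::memory_order_release);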

                            • lmm 2 hours ago

                              > Can the C compiler really optimize foo to always return 0?

                              Yes

                              > That seems extremely unintuitive to me.

                              C compilers are extremely unintuitive. This is a relatively sane case; they do things that are much more surprising than this.

                              • marcosdumay 2 hours ago

                                That's the most straightforward example of undefined behavior badness you'll find. Things in practice are usually far less intuitive than this (mostly because people notice and avoid writing such straightforward problems).

                                • AlexandrB 3 hours ago

                                  In embedded this situation is quite common when x points to a hardware register. The typical solution is to declare x as volatile[1], which tells the compiler to omit these optimizations.

                                  It's very common for beginner embedded programmers to forget to do this and spend hours debugging why the register doesn't change when it should.

                                  [1] https://en.m.wikipedia.org/wiki/Volatile_(computer_programmi...
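
                                  Something like this (a sketch; the register name and address are made up):

                                    #include <stdint.h>

                                    /* Hypothetical memory-mapped status register; the address is invented.
                                       'volatile' forces the compiler to perform every read, so the loop
                                       actually polls the hardware instead of being hoisted into 'while (1)'. */
                                    #define UART_STATUS (*(volatile uint32_t *)0x40011000u)

                                    void wait_until_ready(void) {
                                        while ((UART_STATUS & 0x1u) == 0) {
                                            /* busy-wait; without volatile this can become an infinite loop */
                                        }
                                    }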

                                  • badmintonbaseba 2 hours ago

                                    In a multithreaded, hosted userspace program, the wait operation should synchronize with the other thread. That involves inserting optimization barriers that are understood by the compiler, so it can't optimize this case to always return 0.

                                    • ajross 2 hours ago

                                      No, because in practice that "wait until" operation will act as a memory barrier. The obvious one is a function call. Functions are allowed to have side effects, and one possible side effect is to change the value pointed to by an externally received pointer.

                                      At lower levels, you might have something like an IPC primitive there, which would be protected by a spinlock or similar abstraction, the inline assembly for which will include a memory barrier.

                                      And even farther down still, the memory pointed to by "x" might be shared with another async context entirely and the "wait for" operation might be a delay loop waiting on external hardware to complete. In that case this code would be buggy and you should have declared the data volatile.

                                      • sapiogram 44 minutes ago

                                        > No, because in practice that "wait until" operation will act as a memory barrier.

                                        This is wrong: a memory barrier would not salvage this code from UB. The read from `x` must at the very least be synchronized, and there might be other UB lurking as well.

                                    • RossBencina 5 hours ago

                                      Interesting that the article doesn't even entertain the obvious solution: remove strict aliasing requirements from the standards.

                                      • bluGill 2 hours ago

                                        Compiler writers tell me that it makes a big difference to optimization. I am careful never to cast anything in ways that would cause problems, so I build with strict aliasing. My project started in 2010, though, so we had plenty of prior best practices to help us know better and no legacy code that is hard to refactor to make correct. We have had our share of memory issues, but never anything that could be blamed on strict aliasing.

                                        • zokier 4 hours ago

                                          On the other hand, it does say

                                          > If I were writing correctness-oriented C that relied on these casts I wouldn’t even consider building it without -fno-strict-aliasing.

                                          • Arch-TK 3 hours ago

                                            "correctness-oriented C" definitionally cannot consider "[relying] on those casts".