• rerdavies 30 minutes ago

    Just spent three days of debugging hell getting my app to shut down gracefully, so that it cleanly tears down everything it asynchronously started up without performing any use-after-free deletes. I can sympathise with that.

    • MontagFTB 3 hours ago

      I consider Tracy the state of the art for profiling C++ applications. It’s straightforward to integrate, toggle, gather data, analyze, and respond. It’s also open source, but rivals any product you’d have to pay for:

      https://github.com/wolfpld/tracy
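
      A minimal sketch of what the integration looks like, assuming you compile with TRACY_ENABLE defined and link Tracy's client (TracyClient.cpp or the CMake target); the zone names and the frame loop below are illustrative:

          // main.cpp -- hedged sketch of Tracy's macro-based instrumentation
          #include <tracy/Tracy.hpp>   // older Tracy releases install this as just Tracy.hpp

          void expensive_work()
          {
              ZoneScoped;               // zone named after the enclosing function
              // ... work you want to see on the timeline ...
          }

          int main()
          {
              for (int frame = 0; frame < 1000; ++frame)
              {
                  ZoneScopedN("frame"); // explicitly named zone
                  expensive_work();
                  FrameMark;            // marks a frame boundary for the profiler's frame view
              }
          }

      When TRACY_ENABLE is not defined, the macros expand to nothing, so the instrumentation can stay in the source permanently at zero cost.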

      • rerdavies 28 minutes ago

        Alas, not for Linux. I've been using the unloved and mostly abandoned (and mostly awful) Google perftools (gperftools) on Linux. :-(

        • Veserv 2 hours ago

          Looks fine, but it does not look like there is an automatic full function entry/exit trace, just sampling. The real benefit is when you do not even need to insert manual instrumentation points: you just hit run and you get a full system trace.

          How well does the visualizer handle multi-TB traces? Usually pretty uncommon, but a 10-100 GB trace is not that hard to produce when doing full tracing.

          • jms55 an hour ago

            Of note is that Tracy is aimed at games, where sampling is often too expensive and not fine-grained enough; hence the manual instrumentation.

            For the Bevy game engine, we automatically insert tracy spans for each ECS system. In practice, users can just compile with the tracy feature enabled, and get a rough but very usable overview of which part of their game is taking a long time on the CPU.

            • Veserv an hour ago

              I was talking about automatic instrumentation of every single function call by default. No manual instrumentation needed because everything is already instrumented.

              To be fair, you do still want some manual instrumentation to correlate higher-level things, but a full trace of everything answers most questions. You also want to be able to manually suppress calls for small functions, since those can be performance-relevant or distorting, but the point is “default on, manual off” over “default off, manual on”.
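
              On GCC/Clang, one way to get that “default on, manual off” behaviour is -finstrument-functions, which makes the compiler emit enter/exit hooks for every function. A rough sketch (a real tracer would write a compact binary log and resolve symbols offline; the file and variable names below are mine):

                  // trace_hooks.cpp -- compile the rest of the app with -finstrument-functions
                  #include <cstdio>

                  extern "C" {

                  __attribute__((no_instrument_function))
                  void __cyg_profile_func_enter(void* fn, void* call_site)
                  {
                      std::fprintf(stderr, "enter %p (from %p)\n", fn, call_site);
                  }

                  __attribute__((no_instrument_function))
                  void __cyg_profile_func_exit(void* fn, void* call_site)
                  {
                      std::fprintf(stderr, "exit  %p (from %p)\n", fn, call_site);
                  }

                  } // extern "C"

                  // g++ -finstrument-functions app.cpp trace_hooks.cpp
                  // GCC's -finstrument-functions-exclude-file-list / -exclude-function-list
                  // options are the "manual off" escape hatch for tiny hot functions.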

        • loeg 4 hours ago

          But what was the shutdown bug you were trying to identify? Was this destructor logging actually useful? The article teases the problem and provides detailed instructions for reproducing the logging, but doesn't actually describe solving the problem.

          • jprete 5 hours ago

            Address/MemorySanitizer are also meant for this kind of problem. https://github.com/google/sanitizers/wiki/AddressSanitizer https://github.com/google/sanitizers/wiki/MemorySanitizer

            Also valgrind, but I'm more familiar with the first two.
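
            For the shutdown use-after-free case specifically, ASan is often the quickest route: build everything with -fsanitize=address (and -g for symbolized reports) and it aborts at the first bad access with stack traces for the allocation, the free, and the use. A deliberately broken toy example:

                // uaf.cpp -- illustrative only
                struct Device { int value = 42; };

                int main()
                {
                    auto* dev = new Device();
                    delete dev;          // e.g. an async shutdown path freeing the object
                    return dev->value;   // ASan reports heap-use-after-free here
                }

                // g++ -g -fsanitize=address uaf.cpp && ./a.out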

            • rqtwteye 3 hours ago

              I did this a long time ago with macros. It helped me to find a ton of leaks in a huge video codec codebase.

              I still don't understand the hate for the C preprocessor. It enables doing things like this without any overhead. Set a flag and you get constructor/destructor logging and whatever else you want; don't set it and you get the regular behavior. Zero overhead.
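
              A minimal sketch of the pattern (the macro and class names are my own, not from that codebase):

                  // ctor_trace.h -- illustrative only
                  #include <cstdio>

                  #ifdef ENABLE_CTOR_TRACE
                  #define TRACE_CTOR(cls)  std::fprintf(stderr, "+ %s %p\n", #cls, (void*)this)
                  #define TRACE_DTOR(cls)  std::fprintf(stderr, "- %s %p\n", #cls, (void*)this)
                  #else
                  #define TRACE_CTOR(cls)  ((void)0)   // expands to nothing: zero overhead
                  #define TRACE_DTOR(cls)  ((void)0)
                  #endif

                  struct Buffer
                  {
                      Buffer()  { TRACE_CTOR(Buffer); }
                      ~Buffer() { TRACE_DTOR(Buffer); }
                  };

                  // Build with -DENABLE_CTOR_TRACE to log every construction/destruction;
                  // pairing or counting the +/- lines per class exposes leaks.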

              • jonathrg 3 hours ago

                The hate might have to do with it being such a primitive and blunt tool; doing anything moderately complex becomes extremely complicated and fragile.

                • tialaramex 3 hours ago

                  Yeah, this very primitive tool easily creates the programming equivalent of the "iwizard problem".

                  [You straightforwardly replace "mage" with "wizard" and oops, now your images are "iwizards" and your "magenta" is "wizardnta".]

                • synergy20 2 hours ago

                  Do you have a write-up of how you did it? I'm interested, thanks.

                • neverartful 5 hours ago

                  I did something similar once, but my implementation didn't rely on any compiler features. I made tracing macros for constructors, destructors, and regular C++ methods. If tracing was turned on in the macros, the information given to the macro (class name, method name, etc.) would be passed to the tracing manager, which would serialize it to a string and send it through a TCP socket. I also wrote a GUI tracing monitor that would listen on a socket for tracing messages and display the messages received (including counts by class and method), with filters to tweak. It was a nice tool to have and was instrumental in finding memory leaks and obscure crashes. This was back in the late 1990s or early 2000s.
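
                  Roughly, the moving parts looked like this (all names below are invented; the original had per-class counts and filtering in the GUI monitor):

                      // trace_client.h -- sketch: macro -> trace manager -> TCP socket
                      #include <arpa/inet.h>
                      #include <netinet/in.h>
                      #include <sys/socket.h>
                      #include <unistd.h>
                      #include <string>

                      class TraceManager
                      {
                      public:
                          static TraceManager& instance() { static TraceManager m; return m; }

                          void send(const std::string& msg)
                          {
                              if (fd_ >= 0) (void)::write(fd_, msg.c_str(), msg.size());
                          }

                      private:
                          TraceManager()
                          {
                              fd_ = ::socket(AF_INET, SOCK_STREAM, 0);
                              sockaddr_in addr{};
                              addr.sin_family = AF_INET;
                              addr.sin_port = htons(9999);                 // monitor's listen port
                              inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
                              if (::connect(fd_, (sockaddr*)&addr, sizeof addr) != 0)
                              {
                                  ::close(fd_);
                                  fd_ = -1;                                // tracing silently disabled
                              }
                          }
                          ~TraceManager() { if (fd_ >= 0) ::close(fd_); }

                          int fd_ = -1;
                      };

                      #ifdef ENABLE_TRACE
                      #define TRACE_METHOD(cls, method) \
                          TraceManager::instance().send(#cls "::" #method "\n")
                      #else
                      #define TRACE_METHOD(cls, method) ((void)0)
                      #endif

                  On the other end, anything from a GUI monitor down to nc -l 9999 can receive one line per call and aggregate counts by class and method.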

                  • meindnoch 5 hours ago

                    And what was the bug in the end?

                    • meisel 3 hours ago

                      I’d say AddressSanitizer is a better starting point, and likely to show memory issues faster than this.