• maaaaattttt 4 hours ago

    That’s a name that won’t go unnoticed by German-speaking colleagues.

    • haustlauf 2 hours ago

      As well as the Dutch ones.

      • starbugs 43 minutes ago

        Please explain for international audiences :)

        • j7c6 2 hours ago

          I just pitched "Product Information Management Machine Learning" to my team lead ;) In my opinion, machine learning is largely underestimated in that field of software... Maybe we should consider a different name, though.

        • zwaps 4 hours ago

          Am I blind? Where is the code?

          • dagw 3 hours ago

            Looks like they haven't actually published the source code, despite both GitHub and PyPI claiming the project is Apache-licensed. If you install the package, all you get in the wheel is precompiled Cython libraries and the absolute minimum Python code needed to import them.
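
            For anyone who wants to verify this themselves: wheels are just zip archives, so you can inspect one without installing it. A rough sketch below, assuming the PyPI name is "PiML" (adjust if it differs):

                # Download the wheel without installing it, then list its contents.
                # Assumption: the PyPI package is named "PiML"; adjust if needed.
                import glob
                import subprocess
                import zipfile

                subprocess.run(
                    ["pip", "download", "--no-deps", "--only-binary", ":all:",
                     "PiML", "-d", "wheels"],
                    check=True,
                )

                for wheel_path in glob.glob("wheels/*.whl"):
                    with zipfile.ZipFile(wheel_path) as wheel:
                        for name in wheel.namelist():
                            # Real source would show up as .py files; per the above,
                            # this wheel is mostly .so binaries plus import shims.
                            if name.endswith((".py", ".so", ".pyd")):
                                print(name)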

            • mrks_hy an hour ago

              Given the broader state of the Python ecosystem, that is a red flag for me. Why would you add an OSS license but withhold the source code?

              The only reason I can think of is to "trick" people into downloading it, betting that no one actually checks the source, while hiding something in the .so files that ship as wheels. Tread carefully.

              • Labo333 an hour ago

                Indeed, this is a major blocker. I am very wary of installing it and would never ship this to production.

          • melenaboija 5 hours ago

            Is the source code available?

            • frakt0x90 5 hours ago

              Very happy to see FIGS on the list. I almost got a chance to use it on a recent project, but the customer ultimately decided they valued accuracy over interpretability. I wonder if the recent KANs fit into this? They're more interpretable than other NN architectures. Also, Berkeley has had a few articles on interpretable methods (including FIGS) over the years, but I don't know how they compare to what you've already implemented:

              https://bair.berkeley.edu/blog/2022/02/02/imodels/

              https://bair.berkeley.edu/blog/2020/04/23/decisions/

              • abhgh 3 hours ago

                I have used a successor [1] of FIGS, and my experience was that although the theory is elegant, their benchmarking was incomplete. This is something I would look out for if you are planning to use it on a real-world problem. Again, this is not to be negative about the paper, but there is a gap between theory and practice.

                I raised this as an issue [2] on their repo, but after a few exchanges didn't receive any response. If you look at the last comment on that issue thread, you will see that a Random Forest with a proper hyperparameter search produces competitive results; a rough sketch of the kind of baseline I mean is below, after the links.

                [1] https://proceedings.mlr.press/v162/agarwal22b.html

                [2] https://github.com/csinva/imodels/issues/129
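
                To be concrete about the baseline: by "proper hyperparameter search" I mean something along these lines (the search space here is illustrative, not the exact one from the issue):

                    # Illustrative sketch of a tuned Random Forest baseline; the
                    # search space is made up for the example, not taken from the issue.
                    from sklearn.datasets import load_breast_cancer
                    from sklearn.ensemble import RandomForestClassifier
                    from sklearn.model_selection import RandomizedSearchCV

                    X, y = load_breast_cancer(return_X_y=True)

                    param_distributions = {
                        "n_estimators": [100, 300, 500],
                        "max_depth": [None, 5, 10, 20],
                        "min_samples_leaf": [1, 2, 5, 10],
                        "max_features": ["sqrt", "log2", None],
                    }

                    search = RandomizedSearchCV(
                        RandomForestClassifier(random_state=0),
                        param_distributions,
                        n_iter=25,
                        cv=5,
                        scoring="roc_auc",
                        random_state=0,
                    )
                    search.fit(X, y)
                    print(search.best_params_, round(search.best_score_, 3))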

                • 3abiton 5 hours ago

                  Quick rundown (QRD) on FIGS for the curious?

                  • frakt0x90 5 hours ago

                    This explains it better than I could:

                    https://bair.berkeley.edu/blog/2022/06/30/figs/

                    • bravura 4 hours ago

                      Pretty cool. So it greedily constructs an ensemble of trees from scratch: at each step, it takes the single split that reduces the loss the most. That's a nice approximation of the most compact tree ensemble.
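
                      For anyone who wants to poke at it, a rough sketch using the imodels package mentioned upthread (assuming it exposes FIGSClassifier with a scikit-learn-style API and a max_rules cap on the total number of splits; check the docs):

                          # Rough sketch: fit a small FIGS model via imodels (linked upthread).
                          # Assumes FIGSClassifier follows the scikit-learn fit/score API and
                          # that max_rules caps the total number of splits across all trees.
                          from imodels import FIGSClassifier
                          from sklearn.datasets import load_breast_cancer
                          from sklearn.model_selection import train_test_split

                          X, y = load_breast_cancer(return_X_y=True)
                          X_train, X_test, y_train, y_test = train_test_split(
                              X, y, random_state=0)

                          # A small cap keeps the ensemble readable.
                          model = FIGSClassifier(max_rules=10)
                          model.fit(X_train, y_train)
                          print(model)  # printing the model should show the learned splits
                          print(model.score(X_test, y_test))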

                • pragma_x 5 hours ago

                  I misread that as "impenetrable" machine learning and thought: "Well, it's about time someone admitted it."

                  Ironically, the low-code examples do a good job of making this space a little more approachable.

                  • setopt 6 hours ago

                    PiML is also a common abbreviation for Physics-informed Machine Learning.

                    • jszymborski 3 hours ago

                      TIL. I've only ever heard the term PINN (physics-informed neural networks) before.