That’s a name that won’t go unnoticed by the German-speaking colleagues.
As well as the Dutch ones.
Please explain for international audiences :)
I just pitched "Product Information Management Machine Learning" to my team lead ;) Machine learning is, in my opinion, largely underestimated in that field of software... Maybe we should consider a different name, though.
Am I blind? Where is the code?
Looks like they haven't actually published the source code, despite both GitHub and PyPI claiming the project is Apache-licensed. If you install the package, all you get in the wheel is precompiled Cython libraries and the absolute minimum Python code needed to import them.
Given the broader state of the Python ecosystem, that is a red flag for me. Why would you add an OSS license but withhold the source code?
The only reason I can think of is to "trick" people into downloading it, betting that no one actually checks the source, and hiding something in the .so files that are shipped as wheels. Tread carefully.
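For anyone who wants to verify this themselves: a wheel is just a zip archive, so you can list what actually ships without installing anything. A minimal sketch (it builds a stand-in wheel in memory for illustration; with a real package you'd pass the path of the downloaded .whl, and `pkg` is a hypothetical name):

```python
import io
import zipfile

def list_wheel_contents(wheel_path_or_file):
    """Return the file names inside a wheel (a wheel is just a zip archive)."""
    with zipfile.ZipFile(wheel_path_or_file) as whl:
        return whl.namelist()

# Demo with an in-memory stand-in wheel; real usage: list_wheel_contents("pkg.whl")
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("pkg/__init__.py", "from ._core import *\n")
    whl.writestr("pkg/_core.cpython-311-x86_64-linux-gnu.so", b"\x7fELF...")

names = list_wheel_contents(buf)
print(names)

# Compiled extensions show up as .so (or .pyd on Windows); a wheel with
# almost no .py files next to them ships no readable source.
compiled = [n for n in names if n.endswith(".so")]
print(compiled)
```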
Indeed, this is a major blocker. I am very wary of installing it and would never ship this to production.
Is the source code available?
Very happy to see FIGS on the list. I almost got a chance to use it on a recent project, but the customer in the end decided they valued accuracy over interpretability. I wonder if the recent KANs fit into this? They're more interpretable than other NN architectures. Also, Berkeley has had a few articles on interpretable methods (including FIGS) over the years, but idk how they compare to what you've already implemented:
I have used a successor [1] of FIGS, and my experience was that although the theory is elegant, their benchmarking was incomplete. This is something I would look out for if you are planning to use it on a real-world problem. Again, this is not to be negative about the paper, but there is a gap between the theory and practice.
I had raised this as an issue [2] on their repo, but after a few exchanges didn't receive any response. If you look at the last comment on that issue thread, you will see that Random Forest with proper hyperparam. search produces competitive results.
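To make the "proper hyperparam search" baseline concrete: a sketch of what tuning the main Random Forest knobs before comparing against an interpretable model might look like. The dataset, grid, and parameter choices here are illustrative assumptions, not the setup from that issue thread:

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic regression data as a stand-in for a real benchmark task.
X, y = make_friedman1(n_samples=300, random_state=0)

# Illustrative grid over the RF hyperparameters that usually matter most.
grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 5],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=3)
search.fit(X, y)

# best_score_ is mean cross-validated R^2 for the best parameter combo.
print(search.best_params_, round(search.best_score_, 3))
```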
QRD (quick rundown) on FIGS for the curious ones?
This explains it better than I could:
Pretty cool. So it greedily constructs an ensemble of trees from scratch. At each step, it takes the one decision that reduces the loss the most. That's a nice approximation for the most compact tree ensemble.
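The greedy loss-reduction idea can be sketched in a few lines. This is not the actual FIGS implementation (FIGS grows and extends multiple trees; here each step just adds the single depth-1 stump that most reduces squared error on the current residuals), but it shows the "take the one decision that reduces the loss the most" loop:

```python
import numpy as np

def best_stump(X, r):
    """Exhaustively pick the (feature, threshold) split that most reduces
    squared error on residuals r; return that stump's predictions."""
    best_sse, best_pred = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # drop max so both sides are non-empty
            left = X[:, j] <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = ((r - pred) ** 2).sum()
            if sse < best_sse:
                best_sse, best_pred = sse, pred
    return best_pred

def greedy_stump_sum(X, y, n_steps=2):
    """Greedy sum of stumps: each step fits the current residuals with the
    single best split (a heavily simplified, FIGS-flavoured loop)."""
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_steps):
        pred += best_stump(X, y - pred)
    return pred

# Toy additive target: two independent step functions.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = (X[:, 0] > 0.5).astype(float) + 0.5 * (X[:, 1] > 0.5)

fit = greedy_stump_sum(X, y, n_steps=2)
print(round(float(((y - fit) ** 2).mean()), 4))
```

Because each step picks the split with the lowest remaining loss, the two stumps recover the additive structure without ever growing a deep tree.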
I misread that as "impenetrable" machine learning and thought: "Well, it's about time someone admitted it."
Ironically, the low-code examples do a good job of making this space a little more approachable.
PiML is also a common abbreviation for Physics-informed Machine Learning.
TIL. I've only ever heard the term PINN (physics-informed neural networks) before.