• DominikPeters 9 hours ago

    In this post, I don’t see any comparisons of their solver to other LP solvers on benchmarks, so it’s difficult to know how useful this is.

    • thxg 8 hours ago

      I think it's partially excusable. Most LP solvers target large-scale instances, but instances that still fit in RAM. Think single-digit millions of variables and constraints, maybe a billion nonzeros at most. PDLP is not designed for this type of instance and gets trounced by the best solvers at this game [1]: more than 15x slower (shifted geometric mean) while being 100x less accurate (1e-4 tolerances where other solvers work with 1e-6).

      PDLP is targeted at instances for which factorizations won't fit in memory. I think their idea for now is to give acceptable solutions for gigantic instances when other solvers crash.

      [1] https://plato.asu.edu/ftp/lpfeas.html
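
      For the curious, the core of PDLP is PDHG (primal-dual hybrid gradient), which only ever touches the constraint matrix through matvecs. A toy sketch of plain PDHG on a tiny equality-form LP, with none of PDLP's restarts, adaptive step sizes, or preconditioning (so nothing like the production solver):

```python
import numpy as np

# Toy LP: min c'x  s.t.  Ax = b, x >= 0.
# PDHG only needs products with A and A', never a factorization,
# which is why it scales to instances whose factors don't fit in RAM.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Step sizes must satisfy tau * sigma * ||A||_2^2 < 1.
tau = sigma = 0.9 / np.linalg.norm(A, 2)

x = np.zeros(2)
y = np.zeros(1)
for _ in range(50_000):
    x_new = np.maximum(0.0, x - tau * (c - A.T @ y))  # projected primal step
    y = y + sigma * (b - A @ (2 * x_new - x))         # dual step, extrapolated
    x = x_new

# Converges to the optimum x = [1, 0] (objective value 1).
```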

      • sitkack 7 hours ago

        Hans D. Mittelmann, you are my top dude when it comes to web design. I salute your aesthetic.

        [1] https://plato.asu.edu/

        • whatever1 6 hours ago

          FYI, the table does not include the commercial top dogs (IBM CPLEX, Gurobi, FICO Xpress), since they all withdrew due to politics.

          • thxg 5 hours ago

            Indeed, those are the "big four" solver businesses in the West, and also probably the most reliably good solvers. But by the time Gurobi withdrew from the benchmarks (a few weeks ago), COPT was handily beating them in the LP benchmarks and closing in on them in the MIP benchmarks. Solver devs like to accuse each other of gaming benchmarks, but I'm not convinced anyone is outright cheating right now. Plus, all solver companies have poached quite a bit from each other since CPLEX lost all its devs, which probably equalizes the playing field. So overall, I think the Mittelmann benchmarks still provide a good rough estimate of where the SOTA is.

      • dsfcxv 8 hours ago

        Their numerical results with GPUs, compared to Gurobi, are quite impressive [1]. In my opinion (unless I'm missing something), the key benefits of their algorithm lie in its ability to leverage GPUs and the fact that there's no need to store a factorization in memory. However, if the goal is to solve a small problem on a CPU that fits comfortably in memory, there may be no need for this approach.

        [1] https://arxiv.org/pdf/2311.12180

        • thxg 7 hours ago

          I agree that their results are impressive. Just to be clear, however:

          1. They compare their solver with a 1e-4 error tolerance to Gurobi with 1e-6. This may seem like a detail, but in the context of how typical LPs are formulated, this is a big difference. They have to do things this way because their solver simply isn't able to reach better accuracy (meanwhile, you can ask Gurobi for 1e-9, and it will happily comply in most cases).

          2. They disable presolve, which is 100% reasonable in a scientific paper (makes things more reproducible, gives a better idea of what the solver actually does). If you look at their results to evaluate which solver you should use, though, the results will be misleading, because presolve is a huge part of what makes SOTA solvers fast.
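
          If you want to see how much a knob like presolve matters on your own models, most solver APIs expose it directly. A minimal sketch with SciPy's HiGHS interface (not Gurobi or PDLP, just to show the kind of switch being discussed; on a toy LP the timings are identical, but on large instances the gap can be dramatic):

```python
from scipy.optimize import linprog

# Toy LP: maximize x1 + 2*x2  s.t.  x1 + x2 <= 4, x1 <= 3, x >= 0.
c = [-1.0, -2.0]  # linprog minimizes, so negate the objective
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [4.0, 3.0]

# Same model solved with presolve on vs. off.
res_on = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs",
                 options={"presolve": True})
res_off = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs",
                  options={"presolve": False})
# Both reach the optimum (objective -8 in the minimization form).
```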

          • dsfcxv 7 hours ago

            Hmm... I am reading [1] right now. Looking at Table 7 and Table 11 in [1], they report comparison results with Gurobi's presolve enabled and a 1e-8 error tolerance. Am I missing anything?

            Their performance isn't quite as good as Gurobi's barrier method, but it's still within a reasonable factor, which is impressive.

            • thxg 5 hours ago

              Regarding presolve: When they test their solver "with presolve", they use Gurobi's presolve as a preprocessing step, then run their solver on the output. To be clear, this is perfectly fair, but from the perspective of "can I switch over from the solver I'm currently using", this is a big caveat.

              They indeed report being 5x slower than Gurobi at 1e-8 precision on the Mittelmann instances, which is great. Then again, Mittelmann himself reports them as 15x off COPT, even when allowed to do 1e-4. This is perfectly explainable (COPT is great at benchmarks; there is the presolve issue above; the Mittelmann instance set is a moving target), but I would regard the latter number as more useful from a practitioner's perspective.

              This is not to diminish PDLP's usefulness. If you have a huge instance, it may be your only option!

        • sfpotter 9 hours ago

          They link to three of their papers that have more details.

      • bee_rider 11 hours ago

        I often see the sentiment that, essentially, a method is much better because it uses only matvecs (rather than factorize and solve). This always confuses me, because the game for numerical linear algebra folks is inventing funky preconditioners to fit their problem, right? Unless people are subtly saying “our new method converges incredibly quickly…”

        ILU0 is practically free, right?
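
        For concreteness, the matvec-plus-preconditioner pattern looks like this in SciPy (spilu is SuperLU's threshold ILU rather than a strict ILU(0), but the structure is the same):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# 1-D Poisson matrix: tridiagonal, so the incomplete factors are cheap.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU as a preconditioner: M applies an approximate A^-1.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

# GMRES itself only needs matvecs with A (plus applications of M).
x, info = gmres(A, b, M=M, maxiter=500)
```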

        • sfpotter 9 hours ago

          There are tons of different games to play. Designing a preconditioner that's specific to the problem being solved can help you beat incomplete LU, often by a substantial (even asymptotic) margin.

          If you have a problem that's small enough to factorize and solve, that's great; that probably is the best approach. But factorization doesn't scale in parallel, and for really big problems, iterative methods are the only game in town.

          It's all about knowing the range of methods that are applicable to your problem and the regimes in which they operate best. There's no one-size-fits-all solution.

          • bee_rider 8 hours ago

            I agree, I was just using ILU as a sort of well-known example. Also, because ILU(0) has no fill-in, it should (IMO) be considered the "baseline" in the sense that not trying it should be justified somehow.

          • pkhuong 11 hours ago

            Preconditioning would only apply to approaches like interior point methods, where the condition number quickly approaches infinity.

            • bee_rider 9 hours ago

              I wonder if they put any matrices in the suitesparse collection, or anything like that. It would be a nice fun weekend project to just play around with them.

          • raidicy 11 hours ago

            Slightly off topic, but does anybody have any resources for learning linear programming for business applications?

            • cashsterling 10 hours ago

              I'd recommend getting the 9th or 10th edition of Introduction to Operations Research by Hillier and Lieberman. 9th: https://www.amazon.com/dp/0077298349 You can search for the 10th edition. Both are available used for less than 50 USD in good condition. The book covers a lot more than linear programming. A solution manual for both editions can be found on the internet.

              A good free-PDF optimization book to support the above is Algorithms for Optimization by Kochenderfer & Wheeler ( https://algorithmsbook.com/optimization/ ). It has a chapter on constrained linear optimization with Julia code and is a good secondary resource. Kochenderfer, Wheeler, and colleagues also have two other free optimization books that are a little more advanced. It is exceptionally cool that they make the high-quality PDF freely available; more authors in the technical space are making their books freely available as PDFs, and I applaud them for it.

              • kristopolous 44 minutes ago

                What do people on here use this stuff for? Are you building out large IT infrastructures? Sorry, this is all very new stuff to me.

                • bookofjoe 10 hours ago

                  • syockit 29 minutes ago

                    I'd say be wary of sharing links to full textbooks unless you are really sure they're posted by the copyright owner. The authors could be best friends with the webmaster and have allowed them to post it on their website, but the problem is with the publisher, who may not have given consent.

                • armanboyaci 10 hours ago

                  I recommend this blog: https://yetanothermathprogrammingconsultant.blogspot.com/?m=...

                  Not all posts are business-related, but you can learn many practical tricks that are hard to find in books.

                  • pjot 6 hours ago

                    GAMS is such a wild language/environment.

                    I don’t know of anything better, but I’m currently reliving nightmares from my Master’s.

                    • currymj 5 hours ago

                      JuMP is comparably good, I think. People reasonably don’t want to add a new language to their stack, but if you’re just formulating MPs it’s as nice as anything, free, and you get a well-designed modern programming language if you need it.

                    • tomas789 8 hours ago

                      I'll add my +1 to this. I often come across this blog's posts while working as an OR professional.

                    • nickpeterson 10 hours ago

                        I really wish I could find solid websites/blogs/resources for operations research, mathematical planning, linear programming, etc., aimed at regular software engineers. I feel like a lot of the really crazy parts of codebases often derive from inexperience with these kinds of tools.

                      • polivier 8 hours ago

                        I write blog posts about constraint programming from time to time. I always include a step-by-step description of the model, which makes it fairly easy to understand. Hopefully this can be of help for you: https://pedtsr.ca

                        • colelyman 8 hours ago

                          Have you seen http://www.hakank.org/ ? Mostly about constraint programming, but definitely in the realm of operations research.

                        • chris_nielsen 9 hours ago

                          Applied Mathematical Programming https://web.mit.edu/15.053/www/AMP.htm

                          • l33t7332273 8 hours ago

                            I second this blog. The comparison of (10 different!) MIP modeling techniques for piecewise linear functions is more complete than any I’ve seen.

                          • nuclearnice3 10 hours ago

                             If you can get your hands on INFORMS publications, they're a good resource.

                            https://www.informs.org/Publications

                            • currymj 10 hours ago

                               The Mosek Modeling Cookbook is really good for seeing the "tricks" of how to reformulate things as convex programs.

                              https://docs.mosek.com/modeling-cookbook/index.html

                               Not for total beginners, but a great 201-level resource.
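
                               A classic example of the kind of trick in there: an absolute value in the objective isn't linear, but splitting each variable into positive and negative parts turns it into an LP. A sketch with SciPy rather than Mosek's conic API (which is what the cookbook itself targets):

```python
from scipy.optimize import linprog

# min |x1| + |x2|  s.t.  x1 + 2*x2 = 2
# Trick: write x = u - v with u, v >= 0 and minimize sum(u) + sum(v);
# at the optimum, u_i + v_i = |x_i|.  Variables: [u1, u2, v1, v2].
c = [1.0, 1.0, 1.0, 1.0]
A_eq = [[1.0, 2.0, -1.0, -2.0]]
b_eq = [2.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
x = [res.x[0] - res.x[2], res.x[1] - res.x[3]]  # recover x = u - v
# Optimum: x = [0, 1] with objective value 1.
```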

                              • ant6n 5 hours ago

                                 Isn't this something that could be useful for consulting? I've occasionally considered trying to help businesses model MILPs to solve their problems, but it's so specialized that finding good matches is like the actual problem. I wonder how specialists like MILP experts find clients.

                                • jeffbee 10 hours ago

                                   You could just grab OR-Tools and work through their example problems, then extend them to something in your area of interest. The Python APIs are easy to experiment with.