Prof. Cunxi Yu and his students at UMD are working on this exact topic and have published a paper on agents for improving SAT solvers [1].
I believe they are extending this idea to EDA / chip-design tools and algorithms, which are also computationally challenging. They have an accepted paper on this for logic synthesis, which will come out soon.
[1] "Autonomous Code Evolution Meets NP-Completeness", https://arxiv.org/abs/2509.07367
nice. EDA is indeed one of the top applications of SAT
It should be noted that MaxSAT 2024 did not include Z3 (as with many competitions). It's possible (I'd argue likely) that the agent picked up techniques from Z3 or some other non-competing solver, rather than actually discovering a novel approach.
Z3 is capable (it's an SMT solver, not just SAT), but it's not very fast at boolean satisfiability and not at all competitive with modern SOTA SAT solvers. Try comparing it to, e.g., Chaff or Glucose.
Or for that matter even from later versions of the same solvers that were in its training data!
True. I'd be curious whether a combination of matching the competition date to the training cutoff and censoring web searches could yield a more precise evaluation.
as it's from 2024 (MaxSAT was not held in 2025), it's quite likely all the solvers are in the training data. so the interesting part here is the instances for which we actually got better costs than what is currently known (in the best-cost.csv file).
As GP noted, the issue is that even newer versions than the ones that competed in MaxSAT are likely in the training data or web resources.
Is Z3 competitive in SAT competitions? My impression was that it is popular due to the theories, the Python API, and the level of support from MSR.
Funnily, this was precisely the question I had after posting this (and the topic of an LLM disagreement discussed in another thread). Turns out not, but sibling comment is another confounding factor.
One problem here is it's very easy to overtune to a past problem set -- even accidentally. You can often significantly improve performance just by changing your random number generator seed until you happen to pick the right assignment for the first few variables of some of the harder problems.
It would be interesting to take the resulting solver and apply it to an unknown data set.
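To make the seed-overtuning failure mode concrete, here's a toy sketch; solve() is a made-up stand-in for any randomized solver, not anything from the repo:

    import random

    def solve(instance, seed):
        # placeholder randomized "solver": the cost it returns depends
        # only on the instance and the seed
        rng = random.Random(hash((instance, seed)))
        return rng.randint(0, 100)  # pretend cost, lower is better

    benchmarks = range(20)  # the fixed "past problem set"

    best_seed, best_total = None, float("inf")
    for seed in range(1000):  # sweep seeds against the same instances
        total = sum(solve(inst, seed) for inst in benchmarks)
        if total < best_total:
            best_seed, best_total = seed, total

    # best_seed now looks like a big improvement, but it encodes nothing
    # generalizable: it's pure luck on this particular benchmark set.
    print(best_seed, best_total)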
Not as many changes to the files under library/ as I expected to see. Most changes seemed to be under a single 'add stuff' commit. If some of the solvers are randomised, then repeatedly running and recording the best solution found will keep improving over time and give the illusion of the agent making algorithmic advances, won't it?
yeah, ofc. but on any problem larger than 40 variables, the gains from random restarts or initializations will quickly plateau
and it would take an algo change to the solver to jump to the next local optimum
I guess my point was that I don't see many algo changes in the commit history, which is a shame if this has been lost; library/* files are largely unchanged from the initial commits. But each time the agent runs, it has access to the best solutions found so far and can start from there, often using randomisation, which the agent claims helps it escape local minima e.g. 'simulated annealing as a universal improver'. It would be nice to see how its learnt knowledge performs when applied to unseen problems in a restricted timeframe.
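For readers unfamiliar with the technique, here is a generic sketch of what a 'simulated annealing as a universal improver' loop looks like for weighted MaxSAT; this is illustrative only, not the repo's actual code:

    import math, random

    def cost(clauses, weights, assign):
        # cost = total weight of unsatisfied clauses; a literal lit is
        # satisfied iff its sign matches the assignment of its variable
        return sum(w for cl, w in zip(clauses, weights)
                   if not any((lit > 0) == assign[abs(lit)] for lit in cl))

    def anneal(clauses, weights, n_vars, steps=100_000, t0=2.0, alpha=0.9999):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        cur = cost(clauses, weights, assign)
        best, t = cur, t0
        for _ in range(steps):
            v = random.randint(1, n_vars)
            assign[v] = not assign[v]      # propose a single variable flip
            new = cost(clauses, weights, assign)
            if new <= cur or random.random() < math.exp((cur - new) / t):
                cur = new                  # accept (uphill moves with prob e^(-d/T))
                best = min(best, cur)
            else:
                assign[v] = not assign[v]  # reject: undo the flip
            t *= alpha                     # geometric cooling schedule
        return best

(a real solver would update the cost incrementally per flip rather than rescoring every clause)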
What counts as “our cost”? How long it takes to find the MaxSAT?
the sum of the weights of the unsatisfied clauses. we want to reduce this number.
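a tiny worked example with made-up clauses and weights (DIMACS-style signed literals):

    clauses = [[1, 2], [-1, 3], [-2, -3]]  # (x1 v x2), (!x1 v x3), (!x2 v !x3)
    weights = [4, 2, 5]
    assign = {1: True, 2: True, 3: False}

    cost = sum(w for cl, w in zip(clauses, weights)
               if not any((lit > 0) == assign[abs(lit)] for lit in cl))
    print(cost)  # 2: only the second clause is falsified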
fine-tuning a small model usually beats prompting a large one for specific tasks imo
sounds like AlphaDev [1] might be a better approach for a problem like this.
[1] "Faster sorting algorithms discovered using deep reinforcement learning", Nature, 2023
somewhat
Would be nice to try this on LCG (CP-SAT) solvers
we've been running something similar in prod. latency is the real bottleneck, not accuracy
anyone else finding that agent architectures are way more expensive than expected?
wrt. token usage?
interesting results but the eval methodology seems a bit optimistic
it's just comparing the cost of the best solution found to the best known cost we had before. O(N). why optimistic?
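something like the following sketch; I'm assuming best-cost.csv has instance/best_cost columns, the real schema may differ:

    import csv

    def improvements(best_cost_csv, our_costs):
        # our_costs: dict mapping instance name -> best cost we found
        better = {}
        with open(best_cost_csv, newline="") as f:
            for row in csv.DictReader(f):
                name, known = row["instance"], int(row["best_cost"])
                if name in our_costs and our_costs[name] < known:
                    better[name] = (known, our_costs[name])
        return better  # instances where we strictly beat the known cost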
If you have showdead on, you can see that this account posts generic oneliners: https://news.ycombinator.com/threads?id=balinha_8864
Is that bad?
It's an indication that it's one of the many bot accounts currently doing the same thing https://hn.algolia.com/?query=this%20is%20more%20nuanced%20t...
So the reason the comment appears weirdly disconnected from the content of the article is that it was generated independently from the content of the article.