Big takeaway for me: the win isn’t better prompts, it’s semantic guarantees. By proving at the bytecode level that the pixel loop is side-effect-free, you can safely split it into long-lived workers and use an order-preserving queue. It's an aggressive transform copilots won’t attempt because they can’t verify invariants. That difference in guarantees (deterministic analysis vs. probabilistic suggestion) explains the 2× gap more than anything else.
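To make the pattern concrete, here is a rough Rust sketch of the general idea only (not the author's actual bytecode transform, and all the names are made up): once the per-pixel function is known to be side-effect-free, you can hand row ranges to long-lived workers and stitch the results back in submission order.

```rust
use std::thread;

// Sketch of the pattern only: a pure per-pixel function is handed out to
// long-lived workers by row range, and output order is preserved simply
// by joining the workers in spawn order (standing in for an
// order-preserving queue).
fn render(width: usize, height: usize, workers: usize) -> Vec<u8> {
    // Stand-in for the side-effect-free pixel computation.
    fn pixel(x: usize, y: usize) -> u8 {
        ((x ^ y) & 0xFF) as u8
    }

    let rows_per_worker = height.div_ceil(workers);
    let mut chunks: Vec<Vec<u8>> = Vec::with_capacity(workers);

    thread::scope(|s| {
        let handles: Vec<_> = (0..workers)
            .map(|w| {
                s.spawn(move || {
                    let start = (w * rows_per_worker).min(height);
                    let end = (start + rows_per_worker).min(height);
                    let mut out = Vec::with_capacity((end - start) * width);
                    for y in start..end {
                        for x in 0..width {
                            out.push(pixel(x, y));
                        }
                    }
                    out
                })
            })
            .collect();
        // Joining in spawn order is what keeps the output ordered.
        for h in handles {
            chunks.push(h.join().unwrap());
        }
    });

    chunks.concat()
}

fn main() {
    let image = render(256, 256, 4);
    println!("rendered {} pixels", image.len());
}
```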
Yes, exactly, and that was the hard part (extracting and verifying the invariants). It's still surprising, though, because an LLM would need to be able to do that for any complex code.
What you wrote is great. Can I copy/paste it into the blog post? (Crediting you, of course.)
For sure. Feel free to copy/paste it. Great blog, by the way. Will keep an eye out for more of your posts.
Anything built to purpose (by a competent dev) will usually beat out a general-purpose tool. I remember burntsushi being surprised that my purpose-built unicode segmentation code so dramatically outperformed the unicode segmentation he had in bytestring, which was based on regular expressions, but personally I would be surprised if it were any different.
Do you have a link to my surprise? I would be surprised if I were surprised by a purpose-built thing beating something more general purpose. :P
It was on reddit. Maybe I misremembered your reaction.
I'm totally not surprised by this. It would be strange if, at this point, we couldn't find anything that a specialized tool could do better.
But rest assured that the LLM folks are watching, and learning from this, so the issue will probably be resolved in the next version. Of course without thanking/crediting the author of the article.
Isn't the original reason for LLMs, language translation, the classic example where LLMs handily beat out bespoke translation tools?
I have a breach parser that I wrote to parse through over 3 billion rows of compressed data (by parsing I simply mean searching for a particular substring). I've tried multiple LLMs to make it faster (currently it does so in under 45 seconds on an M3 Pro Mac); none have been able to do that yet.
https://github.com/44za12/breach-parse-rs
Feel free to drop ideas if any.
For simple string search (i.e., not regular expressions) ripgrep is quite fast. I just generated a simple 20 GB file with 10 random words per line (from /usr/share/dict/words). `rg --count-matches funny` takes about 6 seconds on my M2 Pro. Compressing it using `zstd -0` and then searching with `zstdcat lines_with_words.txt.zstd | rg --count-matches funny` takes about 25 seconds. Both timings start with the file not cached in memory.
Tried that; it takes exactly as much time as my program.
I would start by figuring out where there is room for improvement. Experiments to do:
- how long does it take to just iterate over all bytes in the file?
- how long does it take to decompress the file and iterate over all bytes in the file?
To ensure the compiler doesn't outsmart you, you may have to do something with the data read. Maybe XOR all 64-bit longs in the data and print the result? (See the sketch below.)
You don’t mention file size but I guess the first takes significantly less time than 45 seconds, and the second about 45 seconds. If so, any gains should be sought in improving the decompression.
Other tests that can help locate the bottleneck are possible. For example, instead of processing a huge N megabyte file once, you may process a 1 MB file N times, removing disk speed from the equation.
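For the first experiment, a rough Rust sketch along these lines would do (the CLI and buffer sizes are just placeholders); for the second, wrap the file in your zstd decoder of choice before timing:

```rust
use std::fs::File;
use std::io::{BufReader, Read};
use std::time::Instant;

// Experiment 1: how fast can we merely read every byte of the (already
// decompressed) file? XOR-ing the data into a checksum and printing it
// keeps the compiler from optimizing the loop away.
fn main() -> std::io::Result<()> {
    let path = std::env::args().nth(1).expect("usage: bench <file>");
    let mut reader = BufReader::with_capacity(1 << 20, File::open(path)?);

    let mut buf = vec![0u8; 1 << 20];
    let mut checksum = 0u64;
    let start = Instant::now();

    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        // Fold eight bytes at a time; handle the tail byte by byte.
        let mut words = buf[..n].chunks_exact(8);
        for w in &mut words {
            checksum ^= u64::from_le_bytes(w.try_into().unwrap());
        }
        for &b in words.remainder() {
            checksum ^= b as u64;
        }
    }

    println!("checksum {checksum:#x}, elapsed {:?}", start.elapsed());
    Ok(())
}
```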
What about AlphaEvolve / OpenEvolve https://github.com/codelion/openevolve? It has a more structured way of improving/evolving code, as long as you set up the correct evaluator.
It's a great idea, but yeah, the evaluator (especially in this case) seems hard to build. I'll think about it because it really is a great idea.
I have an older breach data set that I loaded into clickhouse:
SELECT *
FROM passwords
WHERE (password LIKE '%password%') AND (password LIKE '%123456%')
ORDER BY user ASC
INTO OUTFILE '/tmp/res.txt'
Query id: 9cafdd86-2258-47b2-9ba3-2c59069d7b85
12209 rows in set. Elapsed: 2.401 sec. Processed 1.40 billion rows, 25.24 GB (583.02 million rows/s., 10.51 GB/s.)
Peak memory usage: 62.99 MiB.
And this is on a Xeon W-2265 from 2020.
If you don't want to use clickhouse you could try duckdb or datafusion (which is also rust).
In general, the way I'd make your program faster is to not read the data line by line... You probably want to do something like read much bigger chunks, ensure they are still on a line boundary, then search those larger chunks for your strings. Or look into using mmap and search for your strings without even reading the files.
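Something like this Rust sketch, roughly (the file name and needle are placeholders, and in real code you'd want a proper substring search, e.g. the memmem routines from the memchr crate, instead of the naive scan):

```rust
use std::fs::File;
use std::io::Read;

// Read big blocks, only search up to the last newline in each block, and
// carry the partial trailing line over to the next read so no line (and
// no match) is split across two blocks.
fn count_matches(path: &str, needle: &[u8]) -> std::io::Result<usize> {
    let mut file = File::open(path)?;
    let mut buf = vec![0u8; 8 << 20]; // 8 MiB blocks
    let mut carry: Vec<u8> = Vec::new();
    let mut count = 0;

    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            count += naive_count(&carry, needle); // final line, if any
            break;
        }
        carry.extend_from_slice(&buf[..n]);
        if let Some(pos) = carry.iter().rposition(|&b| b == b'\n') {
            count += naive_count(&carry[..=pos], needle);
            carry.drain(..=pos);
        }
    }
    Ok(count)
}

// Placeholder search; swap in an optimized substring search in practice.
fn naive_count(haystack: &[u8], needle: &[u8]) -> usize {
    if needle.is_empty() || haystack.len() < needle.len() {
        return 0;
    }
    haystack.windows(needle.len()).filter(|w| *w == needle).count()
}

fn main() -> std::io::Result<()> {
    // Hypothetical input file and search term.
    let n = count_matches("breach.txt", b"password")?;
    println!("{n} matching positions");
    Ok(())
}
```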
Regex.
You can't just tell an LLM "make it faster, no mistakes or else". You may need to nudge it to use specific techniques (it's a good idea to ask it first what techniques it is aware of), then give it a before-and-after comparison, maybe with assembly. You can even feed the assembly output to another LLM session and ask it to count cycles, then feed the result into yet another session. You can also look yourself at what seems excessive, consult CPU datasheets, and nudge the LLM to work on that area. This workflow isn't much faster than just optimising by hand, but if you are bored with typing code it is a bit refreshing. You get to focus on the "high level" and the LLM does the rest.
>You can't just tell an LLM "make it faster, no mistakes or else".
Just told the LLM to create a GUI in visual basic. I am a hacker now.
Whew compilers are still better than LLMs.
This is cool. I wonder if your VM could work in conjunction with an LLM? Have you tried making this optimizer available as an MCP, or maybe some of the calculated invariants could be exposed as well?
Yes, I did, and it solves tons of problems for coding agents.
It is very likely that an LLM will be able to plagiarize https://ispc.github.io/example.html and lift ready-to-use optimal code for Mandelbrot, while specialized optimizers are locked within a domain. Not to mention that the author is producing graphics: the task should be solved on the GPU in the first place.
I certainly expect a human to do better here, but if you wanna show it, giving a one-line prompt to second-best LLMs to one-shot it isn't really the way to do it. Use Opus and o3, and give it to an agent that can measure things and try more than once.
Great idea. Which agent to use?
I tried with Opus and o3, but I had to copy/paste the code and I wasn't sure that was the best way.
I tried 10 prompts and the simplest one was the best (probably because the code is so simple).
Also, it wasn't done by a human but by my tool (the code in the repo is decompiled bytecode).
After reading another comment, I'm not sure my suggestion is any good; it may not test looking at code and improving it, but instead test "writing an optimized Mandelbrot in Java", which it has probably seen some great examples of.
This matches my experience with AI agents. Wiring up the correct feedback and paying attention to ensure they use it is important. Tests and linters are great, but there's usually much more that human devs look at for feedback, including perceived speed and efficiency.
It's unsurprising to me that the author got this outcome. However, instead of just prompting to optimize the code, I suspect they would have gotten much stronger results from the models if they'd prompted them to write an optimizer.
The author used Copilot in Rider; tbh the Rider integration is one of the worst. Which LLM model was used? The VS Code and Visual Studio Copilot plugins let you select it.
I suspect these benchmarks were run on the default model, GPT-4o, which is now more than a year old.
Oh, good point. Which one should I use? I also ran it with o3 and Claude Sonnet, but the results were similar or worse. (Some are in the repo.)
Hopefully I'm not sounding too pedantic in mentioning this, but LLMs are still deterministic if you're using the same prompt, seed, temperature, etc. (sometimes it even requires the same hardware).
Are they? AFAIK, the “etc” includes using hardware that produces the same results for a given input every time. Once you start to multi-thread/multi-process in combination with floating point math, that can be hard to accomplish.
For example, the result of summing a stream of floats depends on the order the floats arrive in, and that order can change depending on what’s in your CPU cache when you start a computation, on whether something else running on your system such as a timer interrupt evicts something from cache during a computation, etc.
If you're running on your GPU, even if the behavior of your GPU is 100% predictable (I wouldn't know if that's true on modern hardware, but my guess is it isn't), anything that also uses the GPU can change things.
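A tiny, contrived Rust example of the summation-order point:

```rust
fn main() {
    let a = 1.0e16_f64;
    let b = -1.0e16_f64;
    let c = 1.0_f64;

    // Same three values, different association order.
    let left_to_right = (a + b) + c; // 1.0
    let reordered = a + (b + c);     // 0.0: the 1.0 vanishes into -1e16
    println!("{left_to_right} vs {reordered}");
}
```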
Yes, but the analysis it's going to run is probabilistic, whereas the one I'm running is closer to SMT.
Wouldn't the real test be to run all the code through the bytecode optimizer and see who's faster then?
I mean, this is sort of the same as testing the LLM output against the -O3 compiler optimization flag while compiling their programs with no optimizations. Actually, if I read TFA correctly, this is exactly what they're doing, am I wrong?
Or maybe I am wrong and they're testing their VM against compiled code, dunno?
Good point, I should have mentioned that. I was so "in it" that I didn't realize it was a valid question.
This thing outperforms LLVM (at -O3 and -O2) and the JVM too. I didn't explain it here because I covered it in a previous blog post, but that's a fair point.
And it's not that obvious, because the previous example was handled better by ChatGPT while LLVM couldn't handle it.
Thank you for this idea.