> "A week of back and forth, 2.3 billion tokens, $2,283 in API costs, and about ~20 hours of me unsticking it from dead ends. It popped calc."
Correct me if I'm wrong, I'm not a security researcher, but 20 hours, a week of work, $2,283 spent, and over 2 billion tokens is not very 10x-ing, as we were promised. Especially if you take into account that the guy is at least halfway capable of doing this himself.
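For what it's worth, a quick back-of-envelope check of the numbers quoted above (taking the 2.3 billion tokens and $2,283 at face value, and ignoring the ~20 hours of human time):

```python
# Back-of-envelope: effective API cost per million tokens,
# using the figures quoted in the parent comment.
api_cost_usd = 2283      # total API spend
tokens = 2.3e9           # 2.3 billion tokens

cost_per_million = api_cost_usd / (tokens / 1e6)
print(f"${cost_per_million:.2f} per million tokens")  # ≈ $0.99 per million tokens
```

So roughly a dollar per million tokens blended, which says nothing about whether the week of babysitting was worth it.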
I dunno
Chrome exploits (which can obviously be used to compromise people) go for $1,000,000 on the black market, so anything cheaper than that to generate is impressive.
This was using an exploit already fixed in a recent version and publicly known. It's worthless on the black market or as a bug bounty.
This is what I've been screaming from the rooftops for a while: these models can already do this.
Go read the dev's actual blog though. This is more a statement on patch lag than anything else. In my mind that's much more important than "zomg zero days!!!"
A security researcher instructed an LLM to write an exploit for a known bug fixed in an already published release.
Not really impressive
I know most people here hate that, but I think this makes a much stronger case for security by obscurity (not releasing the source code) in these changing times.
Of course, security by obscurity by itself is by no means sufficient.
How?
In the 90s most software was closed source, but cracks/trainers were always available.
Even for Rayman, which had multiple (26?) CD checks during the game.
Security is mainly about slowing the attacker, because there's a maximum amount of stuff a human can do in 24 hours. But now, if you can simulate thousands of humans attacking a system in different ways, it will be cracked.
Just like many stores have locks on their doors, and insurance for if someone breaks the lock.
I'm guessing data security insurance will become a huge market in the years to come.
Aren't we in agreement then? Taking your lock analogy again, people don't put locks on their bikes because they protect them completely, but because they slow down someone who wants to steal them. Given enough resources everything will be cracked, it doesn't mean that making it harder is useless. People cracking games in the 90's may not have had the source code but they had the machine code and knew what to look for and where.
I think part of the concern is that it turns into a truly unmaintainable arms race that might evolve in some unpredictable ways, with potential branches like:
- a lot of open source goes closed source to increase security
- open source is effectively forced to use LLMs to keep up
I am not really arguing against it, because I understand the arguments on both ends and I am not sure what a good solution here is.
This is assuming that project owners and good actors won't also be using LLM tools to protect open code.
Open does not mean vulnerable, open simply means it's a more obvious cat-and-mouse game.
I absolutely assume that project owners will use LLM tools to protect themselves, but it seems like whoever spends more will find more security issues. And a malicious actor could potentially decide to spend more tokens on one specific part of the program, while the owner has to protect everything. I think with open source the idea is that there are more eyes looking at potential problems, and more of those eyes are benevolent, but LLMs change that: it's no longer about the number of people but about whoever is ready to spend the most resources.