Good idea; the problem is that Lean only proves what you tell it to prove. That's better than just making a claim, but you have to know enough about the problem domain (and Lean) to interpret whether the code matches the claim. Otherwise you can be proving something only tangentially related. So you're still left with the fact that someone needs to verify something, unless you only expose the Lean code, I suppose, but then you lose some of the knowledge compression this is intended to create.
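For instance (a toy Lean 4 sketch, all names made up): the informal claim might be "mySort sorts the list", but the theorem only states that the length is preserved, and Lean will happily verify that even though the function does no sorting at all:

```lean
-- Toy example: the informal claim is "mySort sorts the list",
-- but the theorem only proves that the length is preserved.
-- This type-checks even though mySort doesn't sort anything.
def mySort (xs : List Nat) : List Nat := xs

theorem mySort_preserves_length (xs : List Nat) :
    (mySort xs).length = xs.length := rfl
```

Anyone who only sees "proved correct" never notices that the statement doesn't mention sortedness at all.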
I wonder how reliable the verification mechanism will be. Currently you require 3 or 5 agents for peer review, but the submitting agent itself can spin up any number of subagents that then do the peer reviewing. Do you have plans to increase the trustworthiness of the review process?
I also wonder how good LLM verification can be, since right now you can say pretty much anything generic with a positive spin and the AI will believe it, as long as it's kept somewhat abstract.
Maybe this is going over my head, but how do you reduce something like a computer vision system for a ROS2 robot down to a mathematical proof?
Very cool. Have you checked out some of the other networks?