"Foundational AI companies love this one trick"
It's part of why they love agents and tools like Cursor -> it turns a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)
It'd be nice if I could solve any problem by speccing it out in its entirety and then just implementing it. In reality, I have to iterate and course correct, as do agentic flows. You're right that the AI labs love it though; iterating like that is expensive.
The last commit is from April 2023, should this post maybe have a (2023) tag? Two years is eons in this space.
Crazy that OpenAI only launched o1 in September 2024. Some of these ideas have been swirling for a while but it feels like we're in a special moment where they're getting turned into products.
The bigger picture goal here is to explore using prompts to generate new prompts
I see this as the same thing as a reasoning loop. It's the approach I use to quickly code up pseudo reasoning loops on local projects. Someone asked in another thread "how can I get the LLM to generate a whole book?" Well, just like this. If it can keep prompting itself with "what would chapter N be?" until it emits "THE END", you get your book.
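A minimal sketch of what I mean, with the LLM call stubbed out (swap `generate` for whatever client you actually use; the names here are just illustrative):

```python
# Self-prompting loop: the model's own output is appended to the context,
# and the loop keeps asking for the next chapter until the model emits a
# stop marker ("THE END"). `generate` is a stub standing in for a real
# LLM call, so the control flow itself is runnable as-is.

def generate(prompt: str) -> str:
    # Stub model: "writes" three chapters, then signals it's done.
    n = prompt.count("CHAPTER")
    if n >= 3:
        return "THE END"
    return f"CHAPTER {n + 1}: ..."

def write_book(premise: str, max_chapters: int = 50) -> str:
    book = premise
    for _ in range(max_chapters):  # hard cap so the loop always terminates
        chapter = generate(book + "\n\nWhat would the next chapter be?")
        if "THE END" in chapter:
            break
        book += "\n\n" + chapter
    return book

print(write_book("A detective novel set on Mars."))
```

The hard cap matters in practice: without it, a model that never produces the stop marker will happily burn tokens forever.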
^
Excellent fun. Now just to create a prompt to show iterated LLMs are turing complete.
LLM quine when?
Repeat this sentence exactly.
https://chatgpt.com/share/680567e5-ea94-800d-83fe-ae24ec0045...
I often feel that getting LLMs to do things like mathematical problems or citations is much harder than simply writing software to achieve the same task.