This has big "let's do this because we can" energy.
What is the BENEFIT of all this?
Let's use Blockchain instead of a database - because we can.
Let's create a maze of microservices - because we can.
Let's make every function a lambda function - because we can.
Let's make AI write code, run it, verify it, fix it, then run it again - because we can.
Let's burn untold amounts of energy to do simple things - because we can.
Because we can? More like because I have equity in a company that sells this stuff.
To you, what's the point of spending countless billions on space exploration?
You can make that argument about every single thing that is wasteful but can be justified as "research".
Sure, every bit of f--ing around is research, but ROI is far from constant.
Discretion will be the better part of the tech industry, if we ever reach that maturity level.
Haven’t we been seeing libraries that implement this pattern for going on two years now? Take the docstring and monkey-patch the function with LLM-generated code, with optional caching against an AST hash key.
The reason it hasn’t taken off is that it’s a supremely bad and unmaintainable idea. It also just doesn’t work very well, because the LLM doesn’t have access to the rest of the codebase without an agentic loop to ground it.
The real reason it's bad is that it doesn't really make you more productive to do this:
> You write a Python function with a natural language specification instead of implementation code. You attach post-conditions – plain Python assertions that define what correct output looks like.
Vs
> You write a Python function with ~~a natural language specification instead of~~ implementation code.
In many cases.
I'd like to see this with a proper local "instruction cache."
It might even be fun if the first call generates Python (or another language), and subsequent calls go through it. This "optimized" or "compiled" natural language is "LLMJitted" into Python. With interesting tooling, you could then click on the implementation and see the generated code, a bit like looking at generated assembly. Usually you'd just write in some hybrid of Python + natural language, but have the ability to look deeper.
I can also imagine some additional tooling that keeps track of good implementations of ideas that have been validated. This could extend to the community. Package manager. Throw in TRL + web of trust and... this could be wild.
Really tricky functions that the LLM can't solve could be delegated back for human implementation.
the jit angle is actually the most principled framing here -- generate once, cache the compiled artifact, treat it like any other build output. the problem with the naive "call LLM every time" version isn't just cost, it's that you lose referential transparency. same function signature, different behavior on tuesday vs wednesday when the model updates. at least a jit'd artifact is reproducible within a build.
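A sketch of that pinning, with made-up model ids: include the model identifier in the cache key, so a model update yields a new build artifact instead of silently changing an existing one.

```python
import hashlib
import json
from pathlib import Path

def artifact_path(spec: str, model: str, cache_dir: str = ".llm_cache") -> Path:
    """Build-output path for a generated implementation. The key covers
    both the spec and the model, like a lockfile: same inputs, same
    artifact, regardless of what the model does next tuesday."""
    payload = json.dumps({"model": model, "spec": spec}, sort_keys=True)
    key = hashlib.sha256(payload.encode()).hexdigest()
    return Path(cache_dir) / f"{key}.py"
```

Same spec against a newer model hashes to a different path, so "tuesday vs wednesday" behavior drift becomes an explicit regeneration step you can diff and review.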
Nice! I can almost see your vision. In terms of tooling, I think this could be integrated with deep instrumentation (a-la datadog) and used to create self-improving systems.
I can't even imagine how many joules would be used per function call!
As an experiment, it's kind of cool. I'm kind of at a loss to what useful software you'd build with it though. Surely once you've run the AI function once it would be much simpler to cache the resulting code than repeatedly re-generate it?
Can anyone think of any uses for this?
They're handy for situations where it would be impractical to anticipate the way your input might vary. Say you want to accept invoices or receipts in a variety of file formats where the data structure varies, but you can rely on the LLM to parse and organize. AI Functions lets you describe how that logic should be generated on demand for the input received, with post-conditions (another Python function the dev writes) that define what successful outcomes look like. Morgan wrote about the receipt parser scenario here: https://dev.to/morganwilliscloud/the-python-function-that-im... (FYI I'm on the Strands Agents team)
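The shape of that loop, sketched generically (this is not the actual strands-agents API; the generation step is a stub you'd replace with an LLM call):

```python
def postcondition(receipt: dict) -> None:
    # Plain assertions defining what a correct parse looks like.
    assert {"vendor", "total", "line_items"} <= receipt.keys()
    assert abs(receipt["total"] - sum(i["price"] for i in receipt["line_items"])) < 0.01

def run_with_retries(generate, raw_text, max_attempts=3):
    """generate(raw_text, feedback) stands in for an LLM call that returns
    a candidate parse; feedback carries the previous failure back to it."""
    feedback = None
    for _ in range(max_attempts):
        candidate = generate(raw_text, feedback)
        try:
            postcondition(candidate)
            return candidate
        except AssertionError as exc:
            feedback = repr(exc)
    raise ValueError(f"no valid parse after {max_attempts} attempts: {feedback}")
```

The post-condition is ordinary dev-written Python, so it can encode domain rules (totals must reconcile with line items) that a schema alone can't express.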
I've used stuff like this for a hobby project where "effort to write it" vs "times I'm going to use it" is heavily skewed [0]. For production use cases, I can only see it being worth it for things that require using an ML model anyway, like "summarize this document".
[0] e.g. something like the below which I expect to use maybe a dozen times total.
Main routine: In folder X are a bunch of ROM files (iso, bin, etc) and a JSON file with game metadata for each. Look for missing entries, and call [subroutine] once per file (can be called in parallel). When done, summarise the results (successes/failures) based on the now updated metadata.
Subroutine: (...) update XYZ, use metacritic to find metadata, fall back to Google.
You just tell the AI: use as little energy as possible, by whatever means necessary!
Anthropic announces deal to buy 100% of Idaho's potato crop, in return for options, in new energy efficiency push
> run the AI function once it would be much simpler to cache the resulting code than repeatedly re-generate it?
Surely, you'll run a function that does an AI call to cache the resulting code.
The initial version on GitHub does not implement caching or memoization, but it's possible and where the project will likely head. (FYI I'm on the Strands Agents team.)
I wrote about something along these lines 3 years ago, but used the name "Heisenfunctions," which I think is better :)
https://kylekukshtel.com/incremental-determinism-heisenfunct...
A lot of this was also inspired by Ian Bicking's work here:
I’m quite sure that’s the end state of software, except without the software around it. There will only be an AI and an interface. For now, though, while tokens cost a non-trivial amount of energy, I think you can do something more useful if you have the LLM modify the program at runtime, because it’s just many orders of magnitude cheaper. E.g., use the BEAM, its actor model, hot code reloading, and REPL introspection, and you can build a program that an LLM can change, e.g. user says “become a calculator” and “become a pdf to html converter”.
I’m not just making this stuff up of course; I got the idea yesterday after reading Karpathy’s tweet about Nanoclaw’s contribution model (don’t submit PRs with features, submit PRs that tell an LLM how to modify the program). Now I can’t concentrate on my day job. Can’t stop thinking about my little Elixir BEAM project.
This looks like Symbolica, except the great thing about what they're doing is that they're setting new ARC-AGI records.
There were people doing this sort of thing 2-3 years ago. What are they doing now?
apparently still writing blog posts on it and posting them to HN
The "Grace" language is based on the same idea, but lets you get the full benefit of specifying static types.
https://github.com/Gabriella439/grace
It's still probably not a great idea.
Why even return Python data structures? You might as well return things like "a list that contains, in order, 1 ... 10, except the number 5".
Why stop there? Just call the LLM with the data and function description and get it to return the result!
(I'll admit that I've built a few "applications" exploring interaction descriptions with our Design team that do exactly this - but they were design explorations that, in effect, used the LLM to simulate a back-end. Glorious, but not shippable.)
That's basically how it works! (with human authored functions that validate the result, automatically providing feedback to the LLM if needed)
Because you often need the result not as a standalone artifact, but as a piece in a rigid process consisting of well-defined business logic and control flow, with which you can't trust AI yet.
Funny how pydantic is used to parse and not validate, but then there are post-conditions after parsing; those should really be parsed too, or enforced with a JSON schema and properly implemented constrained sampling on the LLM side.
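The "parse, don't validate" version of a post-condition, in stdlib form (pydantic field validators or a JSON schema with constrained decoding play the same role at the model boundary): enforce the condition in the constructor, so an invalid value can never exist downstream.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Percentage:
    """A value that is valid by construction: the post-condition runs
    at parse time, not as a separate check afterwards."""
    value: float

    def __post_init__(self):
        if not 0.0 <= self.value <= 100.0:
            raise ValueError(f"percentage out of range: {self.value}")
```

Once parsing enforces the invariant, downstream code doesn't need to re-check it; a separate "post-condition after parsing" step is exactly the redundancy being complained about.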
AWS just shipped an experimental library through strands-labs, AI Functions, which execute LLM-generated code at runtime and return native Python objects. They use automated post-conditions to verify outputs continuously. Unlike generate-and-verify approaches, the AI-generated code runs directly in your application.
> Now consider a different arrangement. The LLM generates code that actually runs inside your application – at call time, every time the function is invoked.
I'm sure there's a lot of effort put into this, god knows why, but I pray I never have to have this in a production environment I'm on.
This is why RAM is 5x.
Why would I want to do that?
People did this 3 or more years ago. It's funny, but no less dumb now than it was then.
It's in the title. Software 3.1 (years ago).
Is there at least a single benefit to using this?
Obviously you have never built software. English is a terrible programming language; you cannot have ambiguity in defining your computation.
Product owners and business people request code in vague English all the time. It's our job to parse it to code using our own judgement.
> you cannot have ambiguity in defining your computation
nobody except for maybe nasa would make software in this scenario.
Could you do this with Erlang’s term_to_binary functionality?
I use Tidewave as my coding agent and it’s able to execute code in the runtime. I believe it’s using Code.eval_string/3, but you should be able to check the implementation. It’s the project_eval tool.
In my experience it’s a huge leap in terms of the agent being able to test and debug functionality. It’ll often write small code snippets to test that individual functions work as expected.
It may seem like a terrible idea, but I think it's good for running quick scripts. It means you can delegate some uninteresting parts the AI is likely to succeed at.
For example, connecting to endpoints, etc... then the logic of your script can run.
I'm normally pessimistic about LLMs but I'll be the contrarian here and suggest there's actually a potential use case for what TFA proposes and it's programmatic/procedural generation for large game worlds.
There is a use for everything. The problem is, people will try to use this to create CRUD apps for no goddamned reason.
>There is a use for everything.
Eventually, perhaps. I've yet to see a use case for blockchains that isn't merely a worse facsimile of something already existing.
But the electron was useless when it was discovered, so maybe one day
Apparently we have blogging-3.0 as well, since the article is littered with AI-isms.
These attempts at generating code that adheres to whatever spec in Python of all languages are futile and just please investors.
There is a reason that really proving adherence to a spec or making arguments that the spec is reasonable in the first place is hard.
But hey, thinking is hard, let's go AI shopping.
so, this idea goes as follows: expose programmatic access to your program, which potentially operates in a destructive manner (no Undo button) on potentially sensitive data; give a sloppy LLM (sloppy due to its sheer unpredictability and ability to fuck up things a sober human with common sense never ever would) a Python interpreter; then let it run away with it and hope that your boundaries are enough to stop it at the edges, YET don't limit the user too much?
nah, I'm skipping this update.
This is a terrible idea
Is this satire?