Well, just let it make a transpiler, e.g. from Oberon90 to C99. I gave this task to Devin, and after two days of round-tripping, with the LLM getting increasingly entangled in special cases and producing strange code with more and more redundancy, I stopped the exercise, went back to square one, and wrote it myself based on what I already had.
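For a sense of what even the trivial core of such a task looks like, here is a minimal, hypothetical sketch (not the commenter's actual compiler) that renders a toy AST for an Oberon WHILE loop as C99 text. The node shapes and field names are assumptions for illustration; real Oberon has modules, nested procedures, and type extension, which is exactly where the special cases pile up.

```python
# Hypothetical sketch of one corner of an Oberon-to-C99 transpiler:
# a toy AST for a WHILE loop is rendered as C source text.

def emit(node, indent=0):
    pad = "    " * indent
    kind = node["kind"]
    if kind == "while":                      # WHILE cond DO body END
        cond = emit_expr(node["cond"])
        body = "\n".join(emit(s, indent + 1) for s in node["body"])
        return f"{pad}while ({cond}) {{\n{body}\n{pad}}}"
    if kind == "assign":                     # x := e  becomes  x = e;
        return f'{pad}{node["name"]} = {emit_expr(node["expr"])};'
    raise ValueError(f"unhandled node kind: {kind}")

def emit_expr(e):
    if isinstance(e, dict):                  # binary operation
        return f"({emit_expr(e['lhs'])} {e['op']} {emit_expr(e['rhs'])})"
    return str(e)                            # identifier or literal

loop = {"kind": "while",
        "cond": {"lhs": "i", "op": "<", "rhs": 10},
        "body": [{"kind": "assign", "name": "i",
                  "expr": {"lhs": "i", "op": "+", "rhs": 1}}]}
print(emit(loop))
```

The happy path fits on a page; the hard part is everything this sketch raises ValueError for.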
I might be convinced by predictions like the posted one as soon as an LLM is indeed able to independently and correctly solve such a problem, or even add a code generator for yet another target to my compiler, and produce decent code, without my permanent guidance and testing.
It might be true that industry will require fewer software engineers some day, but it might also well be that they continue to need as many engineers as today, or even more, and that these people, together with LLMs, generate ten to a hundred times more output than today. Who knows.
Programming died in the '90s; I haven't written any actual code in decades. All I do most of the time is write high-level, abstract prompts for the software agents which generate the code that actually runs.
Somehow I still get paid for this.
LLMs are really good at classical programming because they have plenty of examples to go off of. But what about quantum languages? What if those languages require drastically different syntaxes that we can't reasonably generate from the primitives of classical computer languages? Won't we need a human to be trained and to generate them?
I do not find that to be the case. Most of the things I'm getting spit out are straight-up broken out of the box: missing imports, syntax errors. Directing an LLM feels like dealing with a junior who is gaslighting you while they think you're gaslighting them. Spending as much time working on prompts to generate code seems foolhardy, because even for the exact same prompt, my code-generation result is so ill-conditioned that the prompt isn't source code to anywhere near the degree of reliability that actual source code is. A model may see the same prompt and generate two entirely different APIs as a solution. It's maddening. Made even worse, I guess, by the fact that most hosted setups want to bill you by token. Makes me wonder if I should start billing by LOC to prove a point.
This almost sounds like you could have a setup issue or are working in a legacy codebase and the APIs are not available as context.
You need to make sure it has access to the information it needs by providing docs as context for any imported code; otherwise it will likely hallucinate, or try to force an ill-fitting solution out of what it does know or can see.
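One way to do this, sketched below under stated assumptions: read the relevant doc files and prepend them to the request before the task itself. The file paths, message format, and system-prompt wording here are all hypothetical, not any specific vendor's API.

```python
# Hypothetical sketch: assemble an LLM request that includes library docs as
# context, so the model works from the real API instead of a hallucinated one.

from pathlib import Path

def build_messages(task: str, doc_paths: list[str]) -> list[dict]:
    docs = []
    for p in doc_paths:
        path = Path(p)
        if path.exists():                    # skip docs that aren't present
            docs.append(f"--- {path.name} ---\n{path.read_text()}")
    context = "\n\n".join(docs) or "(no docs provided)"
    return [
        {"role": "system",
         "content": "Use only the APIs documented below; do not invent others.\n\n"
                    + context},
        {"role": "user", "content": task},
    ]

msgs = build_messages("Add retry logic to the HTTP client",
                      ["docs/http_client.md"])   # hypothetical doc path
```

The point is less the plumbing than the instruction: telling the model explicitly to stay inside the documented surface noticeably cuts down on invented calls.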
> I do not find that to be the case. Most of the things I'm getting spit out are straight up broken out of the box. Like, missing imports, syntax errors.
How is this even possible? You tell the agent to write such-and-such a feature and it will edit the source files, run the compiler, check for issues, fix them, run tests, etc. If there are missing imports or syntax errors, the code won't even compile, and the agent will keep fixing it. Not once since I started using Claude have I had an issue with this.
Are you just typing into a chat and copy-pasting code? That was a terrible experience for me; don't do it.
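The edit-compile-fix loop described above can be sketched in a few lines. This is a toy, not any real agent framework: the "model" is a stub that fixes one known bug, and Python's built-in `compile()` stands in for the real build step; a real agent would send the error text back to the LLM and apply the patch it returns.

```python
# Hypothetical sketch of an agent's generate -> check -> fix loop.

def check(source: str) -> str:
    """Return '' if the source parses, else the syntax error message."""
    try:
        compile(source, "<generated>", "exec")
        return ""
    except SyntaxError as e:
        return str(e)

def fake_model(source: str, error: str) -> str:
    """Stub for the LLM: here it just fixes the one bug this demo plants."""
    return source.replace("def f(x)\n", "def f(x):\n")

def agent_loop(source: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        error = check(source)
        if not error:                # compiles: done
            return source
        source = fake_model(source, error)
    raise RuntimeError("gave up after repeated failures")

broken = "def f(x)\n    return x + 1"   # missing colon
fixed = agent_loop(broken)
```

Because syntax errors are caught inside the loop, they never reach the user, which is presumably why agent-based setups feel so different from chat copy-paste.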
Even before the recent AI capabilities, writing software was (and now certainly is) table stakes.
Deep domain knowledge and expertise are essential. Until you actually work at the coal face in a given industry, you don't know the complexity nor the opportunities for improvement. Talking to the workers is good, but you never get the complete picture.
What about operating the software over time?
Perhaps you could be more specific.
For example, for architectural 3D modeling software, operating the software means being the architect who visualises, designs, and refines the building.