I wonder if Bret has an update, given that LLMs are a thing now.
Bret's life's work is all about direct manipulation and building mental models. LLMs are a layer between humans and our work; they make our tools less direct. My guess is that his advice about LLMs would be to not use them.
This might shed some light: https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relation...
Salient quote under the “AI” question in the FAQ:
> we aim for a computing system that is fully visible and understandable top-to-bottom — as simple, transparent, trustable, and non-magical as possible. When it works, you learn how it works. When it doesn’t work, you can see why. Because everyone is familiar with the internals, they can be changed and adapted for immediate needs, on the fly, in group discussion.
Funny for me, as this is basically my principal problem with AI as a tool.
It’s probably an aesthetic or experiential preference, but for me it’s a strong one: a fundamental value of wanting to make both the system and the work transparent, shareable, and collaborative.
I’ve always liked Bret Victor a great deal, so it wasn’t surprising, but it was satisfying to see alignment on this.
Yes, his website shows it. Read his latest update and the footnote on the home page: https://worrydream.com/
In the age of LLM-powered coding copilots and agents, do programming skills still matter? Should we focus more on design and algorithmic thinking instead? Computer science != programming, now truer than ever.
Victor's principles for making programming cognitively accessible (seeing state, time, and flow) remain relevant with LLMs because they address how humans fundamentally understand systems, not just how we write code.