Notably, this post got his recommendation rescinded from the Varvara resources README: https://git.sr.ht/~rabbits/uxn/commit/4adbd95fe226169a634131...
Not surprising. The 100R folk are very upfront about their ideology around sustainability and eco-justice.
mjk didn't have anything to add to the socioeconomic conversation around LLM usage, to the point of even being a bit tone-deaf (especially if the 100R folk were part of his intended audience).
A bit more open, constructive conversation in response to mjk's post on Mastodon would have helped more people understand the 100R philosophy more intimately; but... it seems the battle lines are drawn. Who wins/loses?
> mjk didn't have anything to add to the socioeconomic conversation around LLM usage
There's some semi-apologetic interest in ML, esp. smaller local models, in the "permacomputing" (don't like the term but whatev) sphere. But I don't know if there's much of a conversation around LLMs. With all the hype and how resource intensive and externalities-heavy they are, I can see wanting to draw a line, but it's sad to see it become a purity test.
Lately the discussion around this has had me thinking of the William Köttke quote: "not only is it ethical to use the resources of the current system to construct the next one; ideally, all the resources of the current system would be used to that end".
I think that if the situation was as dire as it's made out to be (I think it is) and projects like uxn were a serious attempt at a mitigating response (less convinced, as cool as they are), there's room for a conversation about beneficial-detrimental (rather than good-evil). Then we could discuss whether it's a good idea to use LLM-based tools when they are available to help build out infrastructure that runs without them, whether there's a nuance as to at what level of automation we draw the line (Ivan Illich, tools vs machines etc), human augmentation vs replacement, the cognitive load stuff Keeter's post touches on and so on.
Unfortunately, part of the polycrisis seems to be a difficulty in discussing things clearly.
> a bit tone deaf
Agree
I refuse to engage in "LLMs are evil, period" views. That's like walking out into a battlefield with a samurai sword, while your enemy has Gatling guns. You'll be shredded. The pressure to survive means new tools have to be examined and incorporated as and when needed. The resources needed to run a 24B LLM on a gaming GPU are not costing the earth.
It's just the current moral panic. The views aren't even relevant; LLMs will either stick around because they're useful or the industry will collapse if they're not.
And even if people still don't like them, they'll eventually stop caring about hating them.
Eventually it will be only prompts and zero programming as we know it, with a quarter of the team sizes, the chosen ones.
That UXN thing looks like it could be trivially JITted on the fly. Unless I'm misreading their site and the code doesn't have to reside in the ROM, of course.
Actually no, there's no "ROM", that's just the name for the code that gets loaded into RAM. Even the article talked about self-modifying code...
It's a von Neumann machine with no instructions for maintaining instruction-cache coherence. JITs for these are not trivial to produce, because every memory store can potentially invalidate JITted code, so you need clever solutions to make that invalidation check extremely fast.
There isn't ROM per se. The “ROM” image is always loaded into RAM and is free to self-modify as much as it feels like.
Jealous of the access to Oxide Computer. I would love a rack for myself, but don't have the hundreds of thousands to spare. Or three-phase power.