Cool idea, but I keep wondering why there’s no browser version of LM Studio. LM Studio is cool, but setting everything up on ONE device is of no value to me.
The main thing I see here is that prompts never touch a third-party server. If you're in a regulated industry or just don't want proprietary context hitting an API, running inference on your own hardware with encrypted p2p from any device is really cool (and useful).
(Staying in userspace via tsnet without touching kernel sockets is a nice touch too.)
I've been trying to use this all morning, but I keep getting 500/auth errors, even on a completely new device. I can't even log in to my LM Studio account right now.
Wow, that's huge! I'll try that at home.