LLM Chat via SSH (github.com)
Submitted by wey-gu 3 days ago
  • demosthanos 17 hours ago

    Skimming the source code, I got really confused to see TSX files. I'd never seen Ink (React for CLIs) before, and I like it! (A minimal sketch of what it looks like is below.)

    Previous discussions of Ink:

    July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961

    May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837

    Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639
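
    For the curious, here's a rough sketch of what an Ink component looks like (hypothetical names, not taken from this repo): plain React with hooks, except the components render to the terminal instead of the DOM.

        // sketch.tsx: a minimal, assumed Ink example (not this project's code)
        import React, {useState, useEffect} from 'react';
        import {render, Text, Box} from 'ink';

        const Spinner = () => {
          // Animate a trailing "..." every 300 ms.
          const [dots, setDots] = useState('');
          useEffect(() => {
            const timer = setInterval(() => setDots((d) => (d.length < 3 ? d + '.' : '')), 300);
            return () => clearInterval(timer);
          }, []);
          return <Text color="green">thinking{dots}</Text>;
        };

        // render() mounts the tree onto stdout, much like ReactDOM.render onto the DOM.
        render(
          <Box flexDirection="column">
            <Text bold>LLM Chat</Text>
            <Spinner />
          </Box>
        );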

    • amelius 19 hours ago

      I'd rather apt-get install something.

      But that doesn't seem to be an option in the modern era of software distribution, especially with GPU-dependent stuff like LLMs.

      So yeah, I get why this exists.

      • halJordan 5 hours ago

        What is the complaint here? There are plenty of binaries you can invoke through your CLI that will query a remote LLM API.

      • gbacon 15 hours ago

        Wow, that produced a flashback to using TinyFugue in the 90s.

        https://tinyfugue.sourceforge.net/

        https://en.wikipedia.org/wiki/List_of_MUD_clients

        • xigoi 10 hours ago

          It’s not clear from the README what providers it uses and why it needs your GitHub username.

          • gclawes 18 hours ago

            Is this doing local inference? If so, what inference engine is it using?

          • gsibble 18 hours ago

            We made this a while ago on the web:

            https://terminal.odai.chat

            • ryancnelson 15 hours ago

              This is neat... whose Anthropic credits am I using, though? Sonnet 4 isn't cheap! Would I hit a rate limit if I used this for daily work?

              • ccbikai 3 days ago

                I am the author. Thank you for your support.

                You're welcome to help maintain it with me.

                • kimjune01 3 days ago

                  Hey, I just tried it. It's cool! I wish it were more self-aware.

                  • ccbikai 3 days ago

                    Thank you for your feedback; I will optimize the prompt.

                  • t0ny1 18 hours ago

                    Does this project send requests to LLM providers?

                    • cap11235 17 hours ago

                      Are you serious? Yeah, it's using Gemini 2.5 Pro without a server, sure, yeah.

                    • eisbaw 18 hours ago

                      Why not telnet?

                      • accrual 18 hours ago

                        I'd love to see an LLM outputting over a Teletype. Just tschtschtschtsch as it hammers away at the paper feed.

                        • cap11235 17 hours ago

                          A week or so ago, an LLM finetune was posted that speaks like a 19th-century Irish author. I'm somewhat looking forward to having an LLModem model.

                        • RALaBarge 18 hours ago

                          No HTTPS support

                          • benterix 18 hours ago

                            I bet someone can write an API Gateway for this...

                        • dncornholio 18 hours ago

                          Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling issues with rendering.

                          • demosthanos 17 hours ago

                            I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you're referring to by "handling issues with rendering", though—it looks like a pretty bog standard React app. Am I missing something?