• mackmcconnell 4 days ago

    I wonder how sustainable the free model is for AI startups. This shows how easily you can switch from one to another. Maybe we are in the golden days, like back when Uber was cheap…

    • verdverm 3 days ago

      For Uber, car prices and salaries go up over time naturally

      For computing, silicon has become cheaper and more efficient over time

      I expect a race to the bottom and then some stabilization, much like we have seen in general cloud computing, and have seen with token prices

      • daghamm 3 days ago

        "For Uber, car prices and salaries go up over time naturally"

        Or, your VC money runs out and you start treating your gig workers like crap to save a few cents here and there.

        • bravetraveler 3 days ago

          The bottom can still be pretty high! Storage has become an order of magnitude cheaper, yet I still don't bother with block storage pricing

          Dedicated or S3 is where it's at, still plenty of room for gamification

          • hoerzu 3 days ago

            The point is the VC money funding something unsustainable (burning through billions). Token prices will never be zero IMO.

            • meiraleal 3 days ago

              Yes they will. They already are, if you run local models, which are only getting better. There are 7-11B models that are as good as ChatGPT 3.5.

              • chatmasta 2 days ago

                Token costs are not zero when you’re running local models, because you paid for the hardware, and you can’t scale inference indefinitely without paying for more hardware.

                • hoerzu 3 days ago

                  Ok, but running an 11B model gets things right maybe 60% of the time and maxes out your machine's electricity use. Not sure that makes your product the best. Video generation in particular is very compute intensive. I guess prices will decrease over time, but the technical advantage will always be with the smarter model.

                  • meiraleal 2 days ago

                    > and consumes maximum of electricity of your machine

                    OpenAI isn't an electricity company, so the token price is still zero for what it's worth to VCs.

                    > but the technical advance will allways be for the smarter model

                    Not true. Currently, the small models are advancing much faster, with new releases daily.

          • tacone 2 days ago

            Nice idea! Hopefully someone will make something like that for Firefox as well.

            • hoerzu 3 days ago

              Author here, you can read the source code here: https://github.com/franz101/tabgpt

              • radicality 3 days ago

                Related, I’ve been using openrouter.ai a lot recently. You can chat with however many models you want simultaneously, set api parameters, use self-moderated models etc.

                • hoerzu 3 days ago

                  Ah very cool, thanks for sharing. In the next version I'll implement going to the next model if you are rate limited :D
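
                  The rate-limit fallback could be as simple as walking a provider list in order. A minimal sketch (not tabgpt's actual code; the provider shape and `rateLimited` flag are illustrative assumptions):

                  ```javascript
                  // Try each provider in order; on a rate-limit error, move to the next model.
                  // Any other error is treated as a real failure and re-thrown.
                  async function completeWithFallback(providers, prompt) {
                    for (const provider of providers) {
                      try {
                        return await provider.complete(prompt);
                      } catch (err) {
                        if (err.rateLimited) continue; // e.g. HTTP 429 → try the next model
                        throw err;
                      }
                    }
                    throw new Error("all providers rate limited");
                  }
                  ```

                  In practice you'd probably map an HTTP 429 response (or the provider SDK's rate-limit error) onto that flag, and maybe add a backoff before retrying the first model.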

                  • meiraleal 3 days ago

                    this is not related, this is spamming.

                    • radicality 2 days ago

                      I don’t know what to tell you - I’m in no way affiliated with that site and simply found it useful for the kind of tasks similar to what the post was about (chatting with multiple llms at once).

                      • meiraleal 2 days ago

                        From the POV of someone posting a Show HN, that's not nice. They aren't even similar tools: OP's project runs in the browser and doesn't use APIs, a much more innovative approach. You commented nothing on that, just suggested an alternative nobody was looking for.

                  • 486sx33 2 days ago

                    Dogpile for LLMs, love it!

                    • hoerzu a day ago

                      Woah love that didn't know about it