• segasaturn 17 hours ago

    Looks like AI is in its "fortune favours the bold" era now. I hope the Super Bowl commercials will at least be good.

    • doctorpangloss 17 hours ago

      What is the upside for him or Stability? How, and why, would James Cameron make it easier to train image and video generators on deblurred, auto-captioned Hollywood movies and TV shows?

      • glimshe 16 hours ago

        He has a lot of Hollywood contacts for making deals.

        • doctorpangloss 16 hours ago

          Deals for what? There aren't any deals that make sense.

        • lofaszvanitt 14 hours ago

          He has a lot of .......MONEY.

        • Der_Einzige 17 hours ago

            Until Stability AI admits that it only makes money when it actually supports its overwhelmingly NSFW-generating user base, Stability will continue to be a poor choice for investors.

            I still cannot fathom how it is that image gen is still so far ahead of LLMs with regard to prompting (I still can’t weight prompts for ChatGPT, but I can on any image generator).

          • throwuxiytayq 17 hours ago

              > I still cannot fathom how it is that image gen is still so far ahead of LLMs with regard to prompting (I still can’t weight prompts for ChatGPT, but I can on any image generator).

              Is that something you'd really have use for? In the case of images it's easier to immediately judge the effect of a prompt change and its direction. LLMs just blurt out a blob of text that you have to read and analyze every time. Even when rerolling the output for the same prompt, you get a very different-looking result. I find comparing a large number of prompt-output pairs very exhausting. Meanwhile, scrolling through a set of output images is a breeze, and the output quality is comparable at first glance.

              LLM prompts can also be much longer, consisting of multiple sets/layers of instructions, etc. You just slap a new sentence at the end to modify the behavior. Meanwhile, for image generators it feels like every word counts and can sometimes take the output in an unintended direction, hence the need for surgical control over word weights (a rough sketch of how that weighting works is below).
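              For anyone unfamiliar with what that weighting looks like: UIs such as AUTOMATIC1111 accept syntax like (oil painting:1.4), which is typically implemented by scaling the text-encoder embeddings of the weighted tokens before they condition the diffusion model. Here is a rough sketch with Hugging Face diffusers; the weighted_embeds helper and the prompt/weights are invented for illustration, and libraries like compel handle the details (sub-word tokens, renormalization) more carefully.

                import torch
                from diffusers import StableDiffusionPipeline

                pipe = StableDiffusionPipeline.from_pretrained(
                    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
                ).to("cuda")

                def weighted_embeds(pipe, prompt, weights):
                    # Illustrative only: encode the prompt once, then scale the
                    # embedding of each token whose decoded text matches a
                    # weighted word. Real implementations also handle sub-word
                    # tokens and renormalize the result.
                    tok = pipe.tokenizer(
                        prompt, padding="max_length",
                        max_length=pipe.tokenizer.model_max_length,
                        truncation=True, return_tensors="pt",
                    )
                    with torch.no_grad():
                        embeds = pipe.text_encoder(tok.input_ids.to(pipe.device))[0]
                    for i, tid in enumerate(tok.input_ids[0]):
                        word = pipe.tokenizer.decode([int(tid)]).strip()
                        if word in weights:
                            embeds[0, i] *= weights[word]
                    return embeds

                # Up-weight "oil painting", roughly like (oil painting:1.4) in A1111 syntax.
                embeds = weighted_embeds(
                    pipe, "an astronaut riding a horse, oil painting",
                    {"oil": 1.4, "painting": 1.4},
                )
                image = pipe(prompt_embeds=embeds, num_inference_steps=30).images[0]
                image.save("weighted.png")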

            • blackeyeblitzar 17 hours ago

                If it can be used for NSFW, isn’t that a useful differentiator? A lot of AI tech these days is censored on moral, political, or legal grounds.

            • blackeyeblitzar 17 hours ago

                Isn’t Stability a dying company? There have been alleged controversies around the founder and his wife, litigation from Getty, and a claim that the main tech was done by some university grad students (?). The CEO left earlier this year, but I wonder what differentiates Stability today such that Cameron feels it is worth joining.

              • ilrwbwrkhv 17 hours ago

                  Oh no! I thought it was only Sam Altman and OpenAI in the controversies. Are all the AI CEOs mental? I think Anthropic is our only hope now.

                • Mistletoe 17 hours ago

                  • btown 16 hours ago

                    (Note for those who might miss the nuance in a quick skim: the former CEO described in this article is no longer at the company.)

                • undefined 17 hours ago

                  [deleted]