• codexon a day ago

    This paper introduces a new benchmark composed of real remote-work tasks sourced from the freelancing site Upwork. Leading commercial LLMs, including Opus, GPT, Gemini, and Grok, were tested.

    Models released a few days ago, Opus 4.6 and GPT 5.3, haven't been tested yet, but given their performance on other micro-benchmarks, they probably wouldn't score much differently on this one.

    • kolinko a day ago

      They didn't test Opus at all, only Sonnet.

      One of the tasks was "Build an interactive dashboard for exploring data from the World Happiness Report." -- I can't imagine how Opus 4.5 could've failed that.
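
      For scale, that task is a few dozen lines with off-the-shelf tooling. Here's a minimal sketch with Streamlit and Plotly, assuming a local copy of the WHR panel data whose columns include "Country name", "year", and "Life Ladder" (the file name and column names are my assumptions, not taken from the paper):

        # Minimal sketch of "Build an interactive dashboard for exploring
        # data from the World Happiness Report". Assumes a local CSV of the
        # WHR panel data; file name and column names are assumptions.
        import pandas as pd
        import plotly.express as px
        import streamlit as st

        @st.cache_data
        def load_data():
            # Cached so the CSV is only parsed once per session.
            return pd.read_csv("world_happiness_report.csv")

        df = load_data()
        st.title("World Happiness Report explorer")

        # Interactive controls: pick countries and a metric to plot over time.
        countries = st.multiselect(
            "Countries", sorted(df["Country name"].unique()), default=["Finland"]
        )
        metric = st.selectbox(
            "Metric", ["Life Ladder", "Log GDP per capita", "Social support"]
        )

        # Streamlit reruns the script on every widget change, so the chart
        # re-filters and redraws automatically.
        subset = df[df["Country name"].isin(countries)]
        st.plotly_chart(px.line(subset, x="year", y=metric, color="Country name"))

      Run it with "streamlit run app.py". Not claiming this is what the paper graded against, just that the task sits squarely in the sweet spot of current models.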

      • codexon 3 hours ago

        Check the link to the study. It has been updated for Opus 4.5.

    • tessitore a day ago

      This post really should be edited to say "96% of tasks posted on Upwork", since we would all expect that to happen.

      • undefined a day ago
        [deleted]
        • Venn1 a day ago

          ChatGPT: when you want spellcheck to argue with you.

          • scotty79 10 hours ago

            Kinda sus that the least-known model did best and none of the more recent models were tested. Capabilities grow very fast: things that routinely succeed now rarely succeeded even half a year ago.

            • rsynnott 6 hours ago

              I mean, performance is so bad across the board that the ranking is likely essentially random. Monkeys accidentally typing out a bit of Shakespeare.

            • undefined 10 hours ago
              [deleted]
              • zb3 a day ago

                You think they don't? You think AI can replace programmers, today?

                Then go ahead and use AI to fix this: https://gitlab.gnome.org/GNOME/mutter/-/issues/4051

                • stoneforger 8 hours ago

                  Rewrite it in React, it will.