This paper creates a new benchmark composed of real remote-work tasks sourced from the freelancing site Upwork. The best commercial LLMs, including Opus, GPT, Gemini, and Grok, were tested.
The models released a few days ago, Opus 4.6 and GPT 5.3, haven't been tested yet, but given their performance on other micro-benchmarks, they probably won't be much different on this one.
They didn't test Opus at all, only Sonnet.
One of the tasks was "Build an interactive dashboard for exploring data from the World Happiness Report." I can't imagine how Opus 4.5 could have failed that.
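For scale, that task is maybe twenty lines of Streamlit. A minimal sketch, assuming a local CSV export of the report; the file name "whr.csv" and the column names ("Country name", "year", "Ladder score") are hypothetical here, not taken from the benchmark:

```python
# Minimal sketch of the dashboard task, assuming a local CSV export
# of the World Happiness Report. File name and column names are
# assumptions, not the benchmark's actual data schema.
import pandas as pd
import plotly.express as px
import streamlit as st

df = pd.read_csv("whr.csv")

st.title("World Happiness Report Explorer")

# Let the reader pick countries and a metric to plot over time.
countries = st.multiselect("Countries", sorted(df["Country name"].unique()))
metric = st.selectbox(
    "Metric", [c for c in df.columns if c not in ("Country name", "year")]
)

# Filter to the selected countries and draw an interactive line chart.
subset = df[df["Country name"].isin(countries)]
st.plotly_chart(px.line(subset, x="year", y=metric, color="Country name"))
```

Run it with `streamlit run app.py` and you get interactive country and metric filtering essentially for free.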
Check the link to the study. It has been updated for Opus 4.5.
This post really should be edited to say "96% of tasks posted on Upwork", since that's what we would all expect to happen.
ChatGPT: when you want spellcheck to argue with you.
Kinda sus that the least-known model did best and none of the more recent models were tested. Capabilities grow very fast, so tasks that now routinely succeed rarely succeeded even half a year ago.
I mean, performance is so bad across the board that the ranking is likely essentially random. Monkeys accidentally typing a bit of Shakespeare.
You think they don't? You think AI can replace programmers, today?
Then go ahead and use AI to fix this: https://gitlab.gnome.org/GNOME/mutter/-/issues/4051
Rewrite it in React, it will.