• m463 3 hours ago

    I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.

    Nowadays an AI assist with a web search usually eliminates the search altogether and gives you a clear answer right away.

    For example, "how much does a Ford F-150 cost" will give you something ballpark in a second, compared to annoying "research" to find the answer shrouded in corporate obfuscation.

    • monkpit an hour ago

      The turning point was around when Google stopped honoring Boolean ops and quotation marks.

      • gtowey 2 hours ago

        I was just thinking exactly the same. Basic web search has become so horrible that AI is being used as its replacement.

        I find it a sad condemnation of how far the tech industry has fallen into enshittification, failing to provide tools that are actually useful.

        • gh0stcat 2 hours ago

          We always had the technology to do things better; it's the money-making part that has made things worse, technologically speaking. In the same way, I don't see how AI will resolve the problem — our productivity was never the goal, and that won't change any time soon.

          • johnnyanmac 2 hours ago

            And it'll happen again when AI models start resorting to ads.

            • emptybits an hour ago

              Yup. Any LLM recommendation for a product or service should be viewed with suspicion (no different than web search results or asking a commission-based human their opinion). Sponsored placements. Affiliate links. Etc.

              Or when asking an LLM for a comparison matrix or pros and cons between choices ... beware paid placements or sponsors. Bias could be a result of available training data (forgivable?) or of paid prioritization (or de-prioritization of competitors!).

        • irjustin 2 hours ago

          FWIW, these studies are too early. Large orgs have very sensitive data privacy considerations and they're only right now going through the evaluation cycles.

          Case in point: just this past week, I learned Deloitte only recently approved Gemini as its AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.

          To say "AI is failing to deliver" because of only a 4% efficiency increase is a premature conclusion.

          • gwern 2 hours ago

            I'm not sure this is even measuring LLMs in the first place! They say the definition is "big data analytics and AI".

            Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?

            • yoyohello13 an hour ago

              Exactly. My company started carefully dipping its toes into org-wide AI mid last year (IT had been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data privacy considerations involved.

              And for the record, I think they are absolutely right to be cautious; a mistake in my industry can be disastrous, so a considered approach to integrating this stuff is absolutely warranted. Most established companies outside of tech really can’t have the “move fast, break things” mindset.

              • PeterStuer an hour ago

                Meanwhile, "shadow" AI use is around 90%. And if you guess IT would lead the pack on that, you are wrong. It's actually sales and HR that are the most avid unsanctioned AI tool users.

                • AIorNot 2 hours ago

                  Yes, I was recently talking to a BA who specializes in corporate AI adoption; they didn’t realize you could paste screenshots into ChatGPT.

                  These are not the openclaw folks

                  • amarant 2 hours ago

                    What does it even mean to specialise in something and know so little about it? What exactly is this BA person doing?

                    Genuinely confused, I don't get it

                    • shermantanktop 2 hours ago

                      The “corporate” in “corporate AI” can mean tons of work building metrics decks, collecting pain points from users, negotiating with vendors…none of which requires you to understand the actual tool capabilities. For a big company with enough of a push behind it, that’s probably a whole team, none of whom know what they are actually promoting very well.

                      It’s good money if you can live with yourself, and a mortgage and tuitions make it easy to ignore what you are becoming. I lived that for a few years and then jumped off that train.

                      • monkpit an hour ago

                        Sounds like a perfect job for AI!

                • 8cvor6j844qw_d6 2 hours ago

                  It's depressing to hear that managers are openly asking all employees to pitch in ideas for AI in order to reduce employee headcount.

                  For those hearing this at work, better prepare an exit plan.

                  • andsoitis a minute ago

                    > It's depressing to hear that managers are openly asking all employees to pitch in ideas for AI in order to reduce employee headcount.

                    If the manager doesn’t have ideas, it is they who deserve the boot.

                  • nivcmo an hour ago

                    The productivity gains from AI are real but unevenly distributed. In my experience, the biggest wins come from automating the "small" stuff — email triage, scheduling, reminders, follow-ups — not from replacing entire job functions.

                    I built an AI secretary for myself that handles these admin tasks, and it saves me ~12 hours/week. The interesting pattern: knowledge workers who delegate well to AI see outsized gains, while those who treat it as a fancy search engine get marginal improvements.

                    The EU study's 4% average makes sense when you factor in adoption friction, training gaps, and companies bolting AI onto broken workflows. The real productivity leap happens when you redesign processes around AI capabilities, not just layer them on top.

                    • dakolli an hour ago

                      You trust these stochastic text/slot machines for scheduling and follow-ups? Human intention is important for both of those. Triage and reminders I can see, but if you send me an LLM-generated follow-up, I'm just going to assume you don't care.

                      • peterlk 5 minutes ago

                        Yes. Other humans are generally accepting of mistakes below some frequency threshold, and frontier models are very robust in my experience

                      • windows2020 43 minutes ago

                        One process redesign that may be considered a moat for AI: an employee who intends to communicate a sentence or two first passes the text into their AI of choice and asks it to elaborate. On the other end, the colleague uses their AI to summarize the email back down to a bullet point or two. It's challenging for those who don't use AI to keep up.

                      • FanaHOVA an hour ago

                        You know it's an EU study because they bring up "AI patents" in the first 2 minutes of it, as if those mean anything.

                        • lifestyleguru an hour ago

                          AI is affecting everything the same way Covid did; we've been in one single-topic hysteria since 2020, with one short break for attaching bottle caps to bottles.

                          Not even the Russian invasion or the collapse of their automotive industry rattled them.