• simpaticoder an hour ago

    A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason. The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

    How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.

    It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the components of a solution is quite different from composing the solution itself. LLMs exist under the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.

    • __MatrixMan__ an hour ago

      I dunno, magically fabricating a part is fundamentally different than magically deciding where and when to do so.

      AI can magically decide where to put small pieces of code. It's not a leap to imagine that it will later be good at knowing where to put large pieces of code.

      I don't think it'll get there any time soon, but the boundary is less crisp than your metaphor makes it.

      • rsynnott 26 minutes ago

        > AI can magically decide where to put small pieces of code.

        Magically, but not particularly correctly.

        • refulgentis 44 minutes ago

          > I dunno, magically fabricating a part is fundamentally different than magically deciding where and when to do so.

          Right.

          It sounds to me like you agree with the comment you're echoing, but are framing it as disagreement.

          I'm sure I'm missing something.

          • NitpickLawyer 10 minutes ago

            The person you replied to suggests that the analogy of a 3d printer building a part does not hold, as LLM-based coding systems are able to both "print" some code, and decide where and when to do so.

            I tend to agree with them. What people seem to miss about LLM coding systems, IMO:

            a) judging an LLM's coding capabilities after a brief browser session with 4o/Claude is comparable to waking a coder in the middle of the night and having them recite perfect code right then and there. A lot of people interact with it that way, decide it's meh, and write it off.

            b) most people haven't tinkered with systems that incorporate more of the tools human developers use day to day. They'd be surprised at what even small, local models can do.

            c) LLMs seem perfectly capable of adding another layer of abstraction on top of whatever "thing" they get good at. Good at summaries? Cool, now abstract that for memory. Good at Q&A? Cool, now abstract that over document parsing for search. Good at coding? Cool, now abstract that over software architecture.

            d) Most people haven't seen any RL-based coding systems yet. That's fun.

            ----

            Now, of course the article is perfectly reasonable, and we shouldn't take what any CEO says at face value. But I think the pessimism, especially in coding, is also misplaced, and will ultimately be proven wrong.

        • barrell an hour ago

          I would say LLMs are much less “produce a perfect part from nothing” and more “cut down a tree and get the part you asked for, for a random model of car”

          • bee_rider 21 minutes ago

            Car analogies are a little fraught because, I mean, in some cases you can even die if you put just the wrong parts in your car, and they are pretty complex mechanically.

            If you had a printer that could print semi-random mechanical parts, using it to make a car would be obviously dumb, right? Maybe you would use it to make, like, a roller blade wheel, or some other simple component that can be easily checked.

          • ryandrake an hour ago

            > How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air?

            The invention would never see the light of day. If someone were to invent Star Trek replicators, they'd be buried along with their invention. Best case, it would be quickly captured by the ownership class and only allowed to be used by officially blessed manufacturing companies, not by any individuals. They will have learned their lesson from AI and what it does to scarcity. Western society is hopelessly locked into, and dependent on, manufacturing scarcity and the idea that people have to pay for things. The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.

            • d0gsg0w00f 14 minutes ago

              I don't know. Today it's extremely expensive to start new manufacturing competitors. You have to go and ask for tons of capital merely to play the game and likely lose. Anyone with that much capital is probably going to be skeptical of some upstart and consult with established industry leaders. This would be the opportunity for those leaders to step in and take the tech for themselves.

              So to solve this problem you need billions to burn on gambles. I guess that's how we ended up with VC's.

              • CoastalCoder 27 minutes ago

                I'm curious if this is true.

                Are there specific historical examples of this that come to mind?

                • pulvinar 12 minutes ago

                  You're asking for historical examples of great inventions that were hidden from history. I only know of those great inventions that I've hidden...

                  There are a number of counterexamples though. Henry Ford, etc.

                  • l33t7332273 4 minutes ago

                    Just wondering if you’ve actually hidden inventions you’d consider to be great?

                  • marcosdumay 21 minutes ago

                    There are plenty of examples of good things being unavailable due to market manipulation and corrupt government.

                    But I don't know of anything nearly as extreme as destroying an entire invention. Those tend to stick around.

                    • rsynnott 22 minutes ago

                      The closest real-life thing (and probably what a lot of believers in this particular conspiracy theory are drawing on directly) is probably the Phoebus lightbulb cartel: https://en.wikipedia.org/wiki/Phoebus_cartel

                      It’s rather hard to imagine even something like that (and it’s pretty limited in scope compared to the grand conspiracy above) working today, though; the EC would definitely stomp on it, and even the sleepy FTC would probably bestir itself for something so blatant.

                      • marcosdumay 20 minutes ago

                        It only worked at the time because it benefited the public at large.

                    • tjpnz 20 minutes ago

                      >The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.

                      Then there would be a violent revolution to wrest it out of their hands. The benefits of such a technology would be immediately obvious to the layman, and he would not allow it to be hoarded by a select few.

                    • btown an hour ago

                      > specify the correct parts, in the correct order, and use the parts to good purpose

                      While the attention-based mechanisms of the current generation of LLMs still have a long way to go (and may not be the correct architecture) to achieve requisite levels of spatial reasoning (and of "practical" experience with how different shapes are used in reality) to actually, say, design a motor vehicle from first principles... that future is far more tangible than ever, with more access to synthetic data and optimized compute than ever before.

                      What's unclear is whether The Altman Show will be able to recruit and retain the talent necessary to be the ones to get there; even if it is able to raise an order of magnitude more than competitors, that's no guarantee of success. My guess would be that some of the decisions that have led to the loss of much senior talent will slow their progress in the long run. And, perhaps, it's not a bad thing to give society some extra time to adapt to the ramifications of the current wave of AI progress before another one is unleashed.

                      • addcn 43 minutes ago

                        Printed this out and pasted it into my journal. Going to come back to it in a few years. This touches on something important I can’t quite put into words yet. Some fundamental piece of consciousness that is hard to replicate - desire maybe

                        • kylecazar 43 minutes ago

                          I read your first sentence as CEO of a 'hot dog company' and waited three paragraphs for the analogy to close. Your actual post is a good summary of what I believe as well.

                          But what's interesting when I speak to laymen is that the hype in the general public seems specifically centered on the composite solution that is ChatGPT. That's what they consider 'AI'. That specific conversational format in a web browser, as a complete product. That is the manifestation of AI they believe everyone thinks could become dangerous.

                          They don't consider the LLM APIs as components of a series of new products, because they don't understand the architecture and business models of these things. They just think of ChatGPT and UI prompts (or its competitors' versions of the same).

                          • bee_rider 27 minutes ago

                            I think people think* of ChatGPT not as the web UI, but as some mysterious, possibly thinking, machine which sits behind the web UI. That is, it is clear that there’s “something” behind the curtain, and there’s some concern maybe that it could get out, but there isn’t really clarity on where the thing stops and the curtain begins, or anything like that. This is more magical, but probably less wrong, than just thinking of it as the prompt UI.

                            *(which is always a risky way of looking at it, because who the hell am I? Neither somebody in the AI field, nor completely naive toward programming, so I might be in some weird knows-enough-to-be-dangerous-not-enough-to-be-useful valley of misunderstanding. I think this describes a lot of us here, fwiw)

                          • throwaway5752 10 minutes ago

                            That's a really good comment and insight, but understandably I think it is aimed at this forum and a technical audience. It landed well for me in terms of the near-term impact of LLMs and other models. But outside this forum, I think our field is in a crisis from being very substantially oversold and undersold at the same time.

                            We have a very limited ability to define human intelligence, so it is almost impossible to know how near or far we are from simulating it. Everyone here knows how much of a challenge it is to match average human cognitive abilities in some areas, and human brains run at 20 watts. There are people in power who may take technologists and technology executives at their word and move very large amounts of capital on promises that cannot be fulfilled. There was already an AI Winter 50 years ago, and there are extremely unethical figures in technology right now who could ruin the reputation of our field for a generation.

                            On the other hand, we have very large numbers of people around the world on the wrong end of a large and increasing wealth gap. Many of those people are just hanging on, doing jobs that are genuinely threatened by AI. They know this, they fear this, and of course they will fight for their own and their families' livelihoods. This is a setup for large-scale violence and instability. If there isn't a policy plan right now, AI will suffer populist blowback.

                            Aside from those things, it looks like Sam has lost it. The recent story about the TSMC meeting, https://news.ycombinator.com/item?id=41668824, was a huge problem. Asking for $7T shows a staggering lack of grounding in reality and in how people, businesses, and supply chains work. I wasn't in the room and I don't know if he really sounded like a "podcasting bro", but to make an ask like that of companies deploying their own capital is insulting to them. There are potential dangers in applying this technology; there are dangers in overpromising its benefits; and neither is well served when relatively important people in related industries think there is a credibility problem in AI.

                            • olliem36 an hour ago

                              Great analogy! I'll borrow this when explaining my thoughts on whether LLMs are poised to replace software engineers.

                              • rapind an hour ago

                                I tried replacing myself (coding hat) and it was pretty underwhelming. Some day maybe.

                              • surfingdino 30 minutes ago

                                Not enough plastics, glass, or metal in the air to make it happen. You need a scrapyard. Actually, that's how the LLMs treat knowledge. They run around like Wall-E grabbing bits at random and assembling them in a haphazard way to quickly give you something that looks like the thing you asked for.

                                • bboygravity an hour ago

                                  That comparison would make sense if the company were open source and non-profit, promised to make all designs available for free, took Elon Musk's money, and then broke all promises, including the one in its name, and started competing with Musk.

                                  • djjfksbxn an hour ago

                                    > A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason.

                                    Why is it that, in your worldview, a CEO “has to lie”?

                                    Are you incapable of imagining one where a CEO is honest?

                                    > The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

                                    I’ll allow it if you stipulate that, randomly and without reason, when I ask for an alternator it prints me a toy dinosaur.

                                    > It is easy to imagine that the inventor of such a technology

                                    As if the unethical sociopath TFA is about is any kind of, let alone the, inventor of genai.

                                    > Being able to generate code/text/images is of no use to someone who doesn't know what to do with it.

                                    Again, conveniently omitting the technology’s ever present failure modes.

                                    • Null-Set 40 minutes ago

                                      A CEO has a fiduciary duty to lie.

                                    • vbezhenar an hour ago

                                      There are plenty of plastic parts in cars and you can print them with 3D printer. I don't think that anything really changed because of that.

                                      • edgyquant an hour ago

                                        Because those are irrelevant to the point being made in the GP

                                      • rendall an hour ago

                                        I think that is GP's point.

                                    • lolinder an hour ago

                                      To recap OpenAI's decisions over the past year:

                                      * They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.

                                      * They dissolved the safety team.

                                      * They switched to for profit and are poised to give Altman equity.

                                      * All while hyping AGI more than ever.

                                      All of this suggests to me that Altman is in short-term exit preparation mode, not planning for AGI or even GPT-5. If he had another next generation model on the way he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the super alignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.

                                      None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.

                                      • lrg10aa an hour ago

                                        It does look like an exit. Employees were given the chance to cash in some of their shares at an $86 billion valuation. Altman is getting shares.

                                        New "investors" are Microsoft and Nvidia. Nvidia will get the money back as revenue and fuel the hype for other customers. Microsoft will probably pay in Azure credits.

                                        If OpenAI does not make profit within two years, the "investment" will turn into a loan, which probably means bankruptcy. But at that stage all parties have already got what they wanted.

                                        • __MatrixMan__ an hour ago

                                          I don't really follow Altman's behavior much, but just in general:

                                          > If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit

                                          If such a thing could exist and was right around the corner, why would you need a company for it? Couldn't the AGI manage itself better than you could? Job's done, time to get a different hobby.

                                          • lolinder an hour ago

                                            If such a thing was right around the corner, the person who controlled it would be the only person left who had any kind of control over their own future.

                                            Why would you sell that?

                                            • __MatrixMan__ 33 minutes ago

                                              I'm not a believer in general intelligence myself; all we have is a small pile of specific intelligences. But if it does exist, then it would be godlike to us. I can't guess at the motivations of somebody who would want to bootstrap a god, but I doubt that Altman is so strapped for cash that his primary motivator is coming up with something to sell.

                                            • FrustratedMonky 3 minutes ago

                                              "why would you need a company for it? Couldn't the AGI manage itself better than you could?"

                                              Well, you still have to have the baby, and raise it a little. And wouldn't you still want to be known as the parent of such a bright kid as AGI? Leaving early seems to be cutting down on his legacy, if a legacy was coming.

                                              • azinman2 34 minutes ago

                                                Let’s say you got AGI, and it approximated a not so bright impulsive 12 year old boy. That would be an insane technological leap, but hardly one you’d want running the show.

                                                AGI doesn’t mean smarter than the best humans.

                                                • __MatrixMan__ 26 minutes ago

                                                  For an intelligence to be "General" there would have to be no type of intelligence it did not have access to (even if its capabilities in that domain were limited). The idea that that's what humans have strikes me as the same kind of thinking that led us to believe that Earth was at the center of the universe. Surely there are ways of thinking that we have no concept of.

                                                  General intelligence would be like an impulsive 12 year old boy who could see 6 spatial dimensions and regarded us as cartoons for only sticking to 3.

                                                  • throwaway314155 18 minutes ago

                                                    Humans can survive in space and on the moon because our intelligence is robust to environments we never encountered (or evolved to encounter). That's "all" general intelligence is meant to refer to. General just means robust to the unknown.

                                                    I've seen some use "super" (as in superhuman) intelligence lately to describe what you're getting at.

                                              • m3kw9 21 minutes ago

                                                > They dissolved the safety team.

                                                You just made everyone stop reading the rest of your post by falsely claiming that.

                                                • Mistletoe an hour ago

                                                  For people that don't get the references, this graph is so helpful for understanding the world.

                                                  https://en.wikipedia.org/wiki/Gartner_hype_cycle

                                                  It just keeps happening over and over. I'd say we are at "Negative press begins".

                                                  • Analemma_ an hour ago

                                                    Yeah, everything from OpenAI in the last year suggests they have nothing left up their sleeve, they know the competition is going to catch up very soon, and they're trying to cash out as fast as possible before the market notices.

                                                    (In the Gell-Mann amnesia sense, make sure you take careful note of who was going "OAI has AGI internally!!!" and other such nonsense so you can not pay them any mind in the future)

                                                  • latexr an hour ago

                                                    Distorting the old Chinese proverb, “The best time to stop taking Sam Altman at his word was the first time he opened his mouth. The second best time is now”. We’ve known he’s a scammer for a long time.

                                                    https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

                                                    https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

                                                    • 0x1ceb00da 24 minutes ago
                                                      • stonethrowaway an hour ago

                                                        I would love to read a solid exposé of Worldcoin, but I don’t think I will get that from BuzzFeed or from The Atlantic. Both seem to be agenda-driven and hot-headed. I’d like a more impartial breakdown, à la AP-style news reporting.

                                                        • rmltn 36 minutes ago

                                                          One of the above links is from the MIT Technology Review ...

                                                      • deepsquirrelnet 2 hours ago

                                                        > At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future

                                                        I think the job of a CEO is not to tell you the truth; more often than not, the truth is the opposite of what they say.

                                                        What if gpt5 is vaporware, and there’s no equivalent 3 to 4 leap to be realized with current deep learning architectures? What is OpenAI worth then?

                                                        • vbezhenar an hour ago

                                                          I'm paying for my subscription, and I'd probably pay 5x more if that's what it took to keep access to the current service. ChatGPT 4o is incredibly useful to me today, regardless of whether GPT-5 will be good or not. I'm not sure how that reflects on OpenAI's worth, but those company valuations are just bubbles of air anyway.

                                                          • coffeefirst an hour ago

                                                            Would you be willing to share how you're using it?

                                                            I keep hearing from people who find these enormous benefits from LLMs. I've been liking them as a search engine (especially finding things buried in bad documentation), but can't seem to find the life-changing part.

                                                            • vbezhenar an hour ago

                                                              1. Search engine replacement. I'm using it for many queries I would previously have asked Google. I still use Google, but less often.

                                                              2. To break procrastination loops. For example, I often can't name a particular variable because I can see a few alternatives and don't like any of them. Nowadays I just ask ChatGPT and often proceed with its suggestion.

                                                              3. Navigating less-known technologies. For example, my Python knowledge is limited and I don't really use it often, so I don't want to spend time learning it better. ChatGPT is perfect for those kinds of tasks, because I know what I want to get; I just miss some syntax nuances, and I can quickly check the result. Another example is jq: it's a very useful tool, but its syntax is arcane and I can't remember it even after years of occasional tinkering. ChatGPT builds jq programs like a super-human; I just show it example JSON and what I want to get.

                                                              4. Not ChatGPT, but I think Copilot is based on GPT-4, and I use Copilot very often as a smart autocomplete. I haven't really adopted it as a code-writing tool, since I'm very strict about the code I produce, but it still helps a lot with repetitive fragments. Things that used to take me 10-20 minutes, constructing regexps or using editor macros, I can now do with Copilot in 10-20 seconds. For languages like Golang, where I must write `if err != nil` after every line, it also helps me not go crazy.
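                                                              To illustrate point 3 above, here's a minimal sketch of the kind of jq one-liner involved; the JSON and the filter are made up purely for illustration:

```shell
# Made-up example of the jq task described in point 3: show some JSON,
# say what you want out of it. Here: one "name" field per array element,
# with -r stripping the JSON string quoting.
echo '{"items":[{"name":"alpha","size":3},{"name":"beta","size":7}]}' \
  | jq -r '.items[].name'
# prints:
# alpha
# beta
```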

                                                              Maybe I didn't formulate my thoughts properly. None of this is irreplaceable, and I didn't become a 10x programmer. But these tools are very nice and absolutely worth every penny I paid for them. It's like IntelliJ IDEA: I can write Java in notepad.exe, but I'm happy to pay $100/year to JetBrains and write Java in IDEA.

                                                              • henry2023 an hour ago

                                                                I’ve got a local Llama 3.2 3B running on my Mac. I can query it for recipes, autocomplete obvious code (the only thing GitHub Copilot was useful for), and answer simple questions when provided with a little bit of context.

                                                                All with much lower latency than an HTTP request to a random place, knowing that my data can’t be used to train anything, and it’s free.

                                                                It’s absolutely insane this is the real world now.

                                                                • Maro 13 minutes ago

                                                                  I'm not the parent commenter, but similarly, I'd also be more than happy to pay more than $20/mo. Probably up to $50/mo wouldn't be painful, and actual pain would start at $100/mo. Why?

                                                                  1. Due to cost-cutting, for the last year I've been doing 3 people's jobs: my original job of Data Science Director, plus Data Eng and Strategy (both Directors were not re-hired after the most recent departures). The most time-consuming of these would normally be the "Strategy" job, since the company I work for is very "strategy" heavy; orgs are constantly asked to write "strategies", updates, reports, etc. My company is also constantly paying McKinsey and similar firms to do audits, which involves more reports, etc. As I learned, these "strategy" deliverables (where the end result is always a PPT) are very light on actual technical/domain-specific content. The point is, with ChatGPT (`4o` or `o1`), this content is super-easy for me to generate; I can usually produce a 5-10 page document in 1-2 hours: I write down my actual thoughts/plans/perspective, or copy/paste my existing notes from elsewhere as very raw bullets, and then tell ChatGPT to "pretend you're a McKinsey consultant at a large corporate, turn this into...". Then I give the 5-10 pager to the 1 remaining in-house "Strategy" guy, who will spend a week turning it into a terrible corporate PPT, and then for months it will be pre-circulated and circulated, so there's always time to make changes. Without ChatGPT this whole process would be much more time-consuming for me. As an aside, these "strategy" roles will be among the first to shrink in number thanks to ChatGPT, as is already obvious in my org. Once AI can also generate a good-enough-looking PPT with the right corporate template, that final strategy guy will also be in danger.

                                                                  2. I still do a lot of fun private coding over the weekends, these days I regularly use ChatGPT as an aid/assistant.

                                                                  3. I also write a blog. I try not to use ChatGPT too much, because I don't want to contribute to AI-generated slop on the Internet. But I still use it to give me suggestions, write an intro or conclusion, etc.

                                                                  4. Help me come up with presentation skeletons. I will show it a bunch of blog posts I have written on a topic (eg. A/B testing), and ask it to come up with presentation skeletons and then actual slides/bullets.

                                                                  5. Email tone: write a raw email, and then have it change it to the appropriate corporate tone.

                                                                  6. This is dangerous because ChatGPT is often wrong: ask it for clarification on certain technical topics. Only do this if you're a domain expert and can evaluate the answer.

                                                                  Of course, in all these cases, it's never a one-shot process, it usually takes 5-10 steps in the conversation with `4o` to get useful output, less with `o1`.

                                                                • exitb an hour ago

                                                                  That being said, 4o is functionally in the same league as Claude, which makes this a whole different story. One in which the moat is already gone.

                                                                • raincole an hour ago

                                                                  > What if gpt5 is vaporware, and there’s no equivalent 3 to 4 leap to be realized with current deep learning architectures?

                                                                  Sam Altman himself doesn't know whether that's the case. Nobody knows. It's the nature of R&D: if you could tell with 100% confidence whether an architecture works, it wouldn't be cutting edge.

                                                                  • barrell an hour ago

                                                                    Sam Altman has explicitly said the next model will be an even bigger jump than between 3 and 4.

                                                                    I think that was before 4o? I know 4o-mini and o1 for sure have come out since he said that

                                                                    • MOARDONGZPLZ an hour ago

                                                                      > Sam Altman has explicitly said the next model will be an even bigger jump than between 3 and 4.

                                                                      You say this unironically on an article stating that Sam Altman cannot be taken at his word, in a string of comments about him hyping up the next thing so he can exit strongly on the backs of the next greater fool. But seriously, I’m sure GPT-5 will be the greatest leap in history (OpenAI equity holder here).

                                                                    • patcon an hour ago

                                                                      > Nobody knows.

                                                                      I suspect it's a little different. AI models are still made of math and geometric structures. Like mathematicians, researchers are developing intuitions about where the future opportunities and constraints might be. It's just highly abstract, and until someone writes the beautiful Nautilus Mag article that helps a normie see the landscape they're navigating, we outsiders see it as total magic and unknowable.

                                                                      But Altman has direct access to the folks intuiting through it (likely not validated intuitions, but still insight)

                                                                      That's not to say I believe him. Motivations are very tangled and meta here

                                                                    • ThinkBeat 2 hours ago

                                                                      If a CEO lies all the time, and investors make investments because of it, will that not eventually become a problem for the CEO?

                                                                      • 42lux an hour ago

                                                                        Well, let's take Tesla, FSD, and Elon as an example, where a judge just ruled[0] that it's normal corporate puffery and not lies.

                                                                        [0] https://norcalrecord.com/stories/664710402-judge-tosses-clas...

                                                                        • Zigurd an hour ago

                                                                          The 5th circuit's loose attitude about deception is why all Elon's Xs live in Texas, or will soon.

                                                                          It's not trivial. "Mere puffery" has netted Tesla about $1B in FSD revenue.

                                                                          • edgyquant an hour ago

                                                                            Elon's lies are way different; he mostly is just optimistic about time frames. Since Sam Altman and ChatGPT became mainstream, the narrative is about the literal end of the world, and OpenAI and their influencer army have made doomerism their entire marketing strategy.

                                                                            • kevin_thibedeau 43 minutes ago

                                                                              When you're taking in money it's not blind optimism. They sold gullible people a feature that was promised to be delivered in the near future. Ten years later it's clearly all a fraud.

                                                                              • moogly 38 minutes ago

                                                                                > He mostly is just optimistic about time frames

                                                                                Perhaps if you have a selective memory. There's plenty of collections of straight-up set-in-stone falsehoods on the internet to find, if you're interested.

                                                                            • Agentus an hour ago

                                                                              Well, I think this conversation chain has played out multiple times, spanning back to at least Thomas Edison. Oftentimes nothing is certain in a business where you need to take a chance trying to bring an imagined idea to fruition with millions in investor money.

                                                                              CEOs come from marketing backgrounds more often than from other disciplines for the very reason that they have to sell stakeholders, employees, and investors on the possibilities. If a CEO's myth-making turns out to be a lie 50 to 80 percent of the time, he's still a success, as with Edison, Musk, Jobs, and now Altman.

                                                                              But I think AI CEOs seem to be imagining and peddling wilder, fancier myths than average. If AI technology pans out, then I don't feel they're unwarranted. I think there's enough justification, but I'm biased and have been doing AI for 10 years.

                                                                              To your question: if a CEO's lies don't accidentally turn true eventually, as in the case of Holmes, then yes, it's a big problem.

                                                                              • butterfly42069 an hour ago

                                                                                It seems that continuously promising something is X weeks/months/years away is seen as optimism and belief, not blatant disregard for facts.

                                                                                I'm sure the defence is always, "but if we just had a bit more money, we would've got it done"

                                                                                • Zigurd an hour ago

                                                                                  Elizabeth Holmes would like a referral to a lawyer who could make that case.

                                                                                  • butterfly42069 34 minutes ago

                                                                                    I think her end product was too clearly defined for anything she was doing to be passed as progress. I don't think you could make the case for her.

                                                                                    You can make a case that partial self driving is a route to FSD, the ISS is en route to Mars and (you can make a potentially slightly less compelling case) LLMs are on the way to AGI.

                                                                                    No one could make a case that she was en route to the tech she promised.

                                                                                  • blindriver an hour ago

                                                                                    That’s not what happened to Theranos and Elizabeth Holmes.

                                                                                    • butterfly42069 39 minutes ago

                                                                                      Well I didn't say everyone could pull it off successfully

                                                                                      I think the more abstract and less defined the end goal is, the easier it is to make everything look like progress.

                                                                                      The blood-testing lady was pass/fail, really. FSD/AGI are things where you can make anything look like a milestone. Same with SpaceX going to Mars.

                                                                                      • kloop 34 minutes ago

                                                                                        They made medical claims. That's a very bad idea if you aren't 100% sure

                                                                                    • rubyfan an hour ago

                                                                                      depends which investors lose money

                                                                                      • angulardragon03 an hour ago

                                                                                        This is the Tesla Autopilot playbook, which seems to continue to work decently for that particular CEO

                                                                                        • tux3 an hour ago

                                                                                          I'm not sure about the decency of it.

                                                                                          I remember all the articles praising the facade of a super-genius. It's a stark contrast to today.

                                                                                          People write about his troubles or his latest outburst like they would a neighbor's troubled kid. There's very little decency in watching people sink like that.

                                                                                          What's left after reality reasserts itself, after the distortion field is gone? Mostly slow decline, never to reach that high again.

                                                                                          • ben_w an hour ago

                                                                                            The difference between Tesla and Nikola is that some false claims matter more than others: https://www.npr.org/2022/10/14/1129248846/nikola-founder-ele...

                                                                                            Given Altman seems to be extremely vague about exact timelines and mainly gives vibes, he's probably doing fine. Especially as half the stuff he says is, essentially, to lower expectations rather than to raise them.

                                                                                        • mrbungie an hour ago

                                                                                          A typical CEO's job is to guard and enforce a narrative. Great ones also work at adding to the narrative.

                                                                                          But it is always about the narrative.

                                                                                          • Aeolun 2 hours ago

                                                                                            If you need someone to tell you the truth you don’t need a CEO.

                                                                                            What you need a CEO for is to sell you (and your investors) a vision.

                                                                                            • hn72774 an hour ago

                                                                                              Without the truth, a vision is a hallucination.

                                                                                              It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.

                                                                                              In reality, this vision you speak of is more like the blind leading the blind.

                                                                                              • squarefoot an hour ago

                                                                                                > It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.

                                                                                                If so many people didn't fall for claims without any proof, religions themselves would not exist.

                                                                                                • ergonaught 41 minutes ago

                                                                                                  All knowledge is incomplete and partial, which is another way of saying "wrong", therefore all "vision" is hallucination. This discussion would not be happening without hallucinators with a crowd sharing their delusion. Humanity generally doesn't find the actual truth sufficiently engaging to accomplish much beyond the needs of immediate survival.

                                                                                                • llamaimperative an hour ago

                                                                                                  This is silly. Delusion kills more companies than facing reality with honesty does.

                                                                                                • antirez an hour ago

                                                                                                  Not possible, since Claude is effectively GPT-5 level in most tasks (EDIT: coding is not one of them). OpenAI lost the lead months ago. Altman talking about AGI (it may take years or decades, nobody knows) is just the usual crazy Musk-style CEO thing that is totally safe to ignore. What is interesting is the incredibly steady progress of LLMs so far.

                                                                                                  • diggan an hour ago

                                                                                                    > Claude is effectively GPT5 level

                                                                                                    Which model? Sonnet 3.5? I subscribed to Claude for a while to test Sonnet/Opus, but never got them to work as well as GPT-4o or o1-preview. Mostly tried it out for coding help (Rust and Python mainly).

                                                                                                    Definitely didn't see any "leap" compared to what OpenAI/ChatGPT offers today.

                                                                                                    • antirez an hour ago

                                                                                                      Both, depending on the use case. Unfortunately, Claude is better than ChatGPT in almost every regard but coding so far, so you would not notice improvements if you test it only on code. Where it shines is understanding complex things and ideas in long text, and the context window is AFAIK 2x ChatGPT's.

                                                                                                      • diggan an hour ago

                                                                                                        Tried it for other things too, but then they just seem the same (to me). Maybe I'll give it another try, if it has improved since last time (2-3 months maybe?). Thanks!

                                                                                                  • latexr an hour ago

                                                                                                    What a shitty world we constructed for ourselves, where the highest positions of power with the highest monetary rewards depend on being the biggest liar. And it’s casually mentioned and even defended as if that’s in any way acceptable.

                                                                                                    https://www.newyorker.com/cartoon/a16995

                                                                                                    • naveen99 2 hours ago

                                                                                                      The future is not binary, it’s a probability.

                                                                                                      • meiraleal an hour ago

                                                                                                        > What if gpt5 is vaporware

                                                                                                        OpenAI decides what they call gpt5. They are waiting for a breakthrough that would make people "wow!". That's not even very difficult and there are multiple paths. One is a much smarter gpt4 which is what most people expect but another one is a real good voice-to-voice or video-to-video feature that works seamlessly the same way chatgpt was the first chatbot that made people interested.

                                                                                                        • deepsquirrelnet 37 minutes ago

                                                                                                          It’s more than that. Because of what they’ve said publicly and already demonstrated in the 3->4 succession, they can’t release something incremental as gpt5.

                                                                                                          Otherwise people might get the impression that we’re already at a point of diminishing returns on transformer architectures. With half a dozen other companies on their heels and suspiciously nobody significantly ahead anymore, it’s substantially harder to justify their recent valuation.

                                                                                                      • bhouston 14 minutes ago

                                                                                                        Sam Altman has to be a promoter and true believer. It is his job to do that, and he does have new tech that didn't exist before, and it is game-changing.

                                                                                                        The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.

                                                                                                        But luckily this doesn’t affect most of us, rather it will only possibly harm his investors if it doesn’t work out.

                                                                                                        If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.

                                                                                                        • rubyfan an hour ago

                                                                                                          I sort of wish there was a filter for my life that would ignore everything AI (stories about AI, people talking about AI and of course content generated by AI).

                                                                                                          The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.

                                                                                                        • austinkhale an hour ago

                                                                                                          There are legit criticisms of Sam Altman that can be levied but none of them are in this article. This is just reductive nonsense.

                                                                                                          The arguments are essentially:

                                                                                                          1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.

                                                                                                          2. Sam _only_ has a record as a deal maker, not a physicist.

                                                                                                          3. AI can sometimes do bad things & utilizes a lot of energy.

                                                                                                          I normally really enjoy the Atlantic since their writers at least try to include context & nuance. This piece does neither.

                                                                                                          • BearOso an hour ago

                                                                                                            I think LLM technology, not necessarily all of CNN, has plateaued. We've used up all the human discourse, so there's nothing to train it on.

                                                                                                            It's like fossil fuels: they took millions of years to create and centuries to consume. We can't just create more.

                                                                                                            Another problem is that the data sets are becoming contaminated, creating a reinforcement cycle that makes LLMs trained on more recent data worse.

                                                                                                            My thoughts are that it won't get any better with this method of just brute-forcing data into a model like everyone's been doing. There needs to be some significant scientific innovations. But all anybody is doing is throwing money at copying the major players and applying some distinguishing flavor.


                                                                                                          • thruway516 22 minutes ago

                                                                                                            "Altman is no physicist. He is a serial entrepreneur, and quite clearly a talented one"

                                                                                                            Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too great at the moment. A talented 'tech whisperer', maybe?

                                                                                                            • mrangle a minute ago

                                                                                                              I can't imagine taking The Atlantic seriously on anything. My word. You aren't actually supposed to read the endless ragebait.

                                                                                                              Contrary to The Atlantic's almost always intentionally misleading framing, the "dot com boom" did in fact go on to print trillions later, and it is still printing them, after what was, for many, an ultimately marginal if account-clearing dip.

                                                                                                              I say that as someone who would be deemed an AI pessimist by many. For example, calling what we have AI is handwaving, and it will be limited within the bounds of what it actually is. This is an obvious statement. Though what we have is also a convincing "AI" parlor trick, up to a point. And it is useful, beyond the issue of that blurred perception.

                                                                                                              But it's wildly early to declare anything to be "what it is" and only that, just like it was and is wild to declare the dot com boom to be over.

                                                                                                              • AndrewKemendo 13 minutes ago

                                                                                                                OpenAI can't be working on AGI, because they have no arc for production robotics controllers.

                                                                                                                AGI cannot exist in a box that you can control. We figured that out 20 years ago.

                                                                                                                Could they start that? Sure, theoretically. However, they would have to massively pivot, and nobody at OAI is a robotics expert.

                                                                                                                • mppm 22 minutes ago

                                                                                                                  Around the time of the board coup and Sam's 7-trillion media tour, there were multiple, at the time somewhat credible, rumors of major breakthroughs at OpenAI -- GPT-5, Q*, and possibly another unnamed project with wow factor. However, almost a year has passed, and OpenAI has only made incremental improvements public.

                                                                                                                  So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?

                                                                                                                  • ilrwbwrkhv 18 minutes ago

                                                                                                                    All hype. Remember when the whole "oh, we are so scared to release this model" thing happened back in the day, and it was worse than GPT-3?

                                                                                                                    All this doing the rounds of foreign governments and acting like artificial general intelligence is just around the corner is what got him this fundraising round today. It's all just games.

                                                                                                                  • goles 2 hours ago

                                                                                                                    • bambax 28 minutes ago

                                                                                                                      > Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics. He predicts that we may have an all-powerful superintelligence “in a few thousand days.”

                                                                                                                      It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.

                                                                                                                      • KaoruAoiShiho an hour ago

                                                                                                                        Nobody is taking Sam Altman at his word, lol. These ideas about intelligence have been believed in the tech world for a long time, and the guy is just the best at monetizing them. People are pursuing this path out of a general conviction in the ideas themselves. I guess for people like Atlantic writers, Sam Altman is the first time they've encountered them, but it really has nothing to do with Sam Altman.

                                                                                                                        • vasilipupkin an hour ago

                                                                                                                          Don't listen to the David Karpfs of the world. Did he predict ChatGPT? If you had asked him in 2018, he would have said AI will never write a story.

                                                                                                                          Now you can use AI to easily write the type of articles he produces, and he's pissed.

                                                                                                                          • throwgfgfd25 29 minutes ago

                                                                                                                            > now you can use AI to easily write the type of articles he produces and he's pissed.

                                                                                                                            You really cannot.

                                                                                                                          • hnadhdthrow123 an hour ago

                                                                                                                            Will human ego, greed, and selfishness lead to our destruction (AI or not)?

                                                                                                                            https://news.ycombinator.com/item?id=35364833

                                                                                                                            • m2024 an hour ago

                                                                                                                              Hopefully, with as little collateral damage to the remaining life on this planet.

                                                                                                                            • thelastgallon 2 hours ago

                                                                                                                              "Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot." - Sam Altman, https://ia.samaltman.com/

                                                                                                                              Reality: AI needs unheard-of amounts of energy. This will make the climate significantly worse.

                                                                                                                              • meroes an hour ago

                                                                                                                                Wow that quote is absolutely crazy. “Discovery of all of physics”. I think that’s the worst puffery/lie I’ve ever heard by a CEO. Science requires experiments so the next LLM will have to be able to design CERN+++ level experiments down to the smallest detail. But that’s not even the hard part, the hard part is the actual energy requirements, which are literally astronomical. So it’s going to either have to discover a new method of energy generation along the way or something else crazy. The true barrier for physics is energy right now. But that’s just the next level of physics, that’s not ALL of it.

                                                                                                                                • moogly 36 minutes ago

                                                                                                                                  I can't imagine anyone writing that paragraph of his with a straight face. The chuckling must've lasted a week.

                                                                                                                                • jt2190 30 minutes ago

                                                                                                                                  > AI needs unheard-of amounts of energy…

                                                                                                                                  … and it always will? It seems terribly limiting to stop exploring the potential of this technology because it’s not perfect right now. Energy consumption of AI models does not feel like an unsolvable problem, just a difficult one.

                                                                                                                                  • golergka an hour ago

                                                                                                                                    So far it seems that AI's appetite for energy might finally be pushing Western countries back to nuclear, which would make the climate significantly better.

                                                                                                                                    • UncleMeat an hour ago

                                                                                                                                      A world where we produce N watt-hours of energy without nuclear plants and a world where we produce N+K watt-hours of energy with K watt-hours coming from nuclear has exactly the same effect on the climate.

                                                                                                                                      • kibwen an hour ago

                                                                                                                                        Unfortunately no, this is not how it works.

                                                                                                                                        The relative quantity of power provided by nuclear (or renewables, for that matter) is NOT our current problem. The problem is the absolute quantity of power that is provided by fossil fuels. If that number does not decrease, then it does not matter how much nuclear or renewables you bring online. And nuclear is not cheaper than fossil fuels (even if you remove all regulation, and even if you build them at scale), so it won't economically incentivize taking fossil fuel plants offline.

                                                                                                                                        • collingreen an hour ago

                                                                                                                                          Nuclear plus everything we were using before is probably not better than just everything we were using before. Hopefully we can continue to reduce high-emission or otherwise damaging power production even while power requirements grow.

                                                                                                                                          • golergka an hour ago

                                                                                                                                            Power means making things and providing services that people want, that make their lives better. Which is a good thing. We need more power, not less.

                                                                                                                                            • g-b-r an hour ago

                                                                                                                                              I guess that global warming is a good thing, then

                                                                                                                                          • ccppurcell an hour ago

                                                                                                                                            Nuclear cannot "make the climate better" but can perhaps slow the destruction down, only if it replaces fossil fuels, not if it is added on top due to increased energy consumption. In that case it's at best neutral.

                                                                                                                                            • ben_w an hour ago

                                                                                                                                              Only if they build the reactors and then go bankrupt, leaving the reactors around for everyone else. Likewise if they build renewables to power the data centres.

                                                                                                                                          • melenaboija an hour ago

                                                                                                                                            It is weird that one of the most highly valued markets (OpenAI, Microsoft investments, Nvidia GPUs, ...) is based on a stack that is available to anyone who can pay for the resources to train the models, and that in my opinion has yet to deliver on the expectations that have been created around it.

                                                                                                                                            Not saying it is a bubble but something seems imbalanced here.

                                                                                                                                            • jasode an hour ago

                                                                                                                                              >one of the most valued markets ... is based on a stack that is available to anyone

                                                                                                                                              The sophisticated investors are not betting on future increasing valuations based on current LLMs or the next incremental iterations of it. That's a "static" perspective based on what outsiders currently see as a specific product or tech stack.

                                                                                                                                              Instead, you have to believe in a "dynamic" landscape where OpenAI the organization of employees can build future groundbreaking models that are not LLMs but other AI architectures and products entirely. The so-called "moat" in this thinking would be the "OpenAI team to keep inventing new ideas beyond LLM". The moat is not the LLM itself.

                                                                                                                                              Yes, if everyone focuses on LLMs, it does look like Meta's free Llama models will render OpenAI worthless. (E.g. the famous memo: https://www.google.com/search?q=We+have+no+Moat%2C+and+Neith...)

                                                                                                                                              As an analogy, imagine that in the 1980s, Microsoft's IPO and valuation looked irrational since "writing programming code on the Intel x86 stack" was not a big secret. That stock analysis would then logically continue by saying "Anybody can write x86 software such as Lotus, Borland, etc." But the lesson learned was that the moat was never the "Intel x86 stack"; the moat was really the whole Microsoft team.

                                                                                                                                              That said, if OpenAI doesn't have any future amazing ideas, their valuation will crash.

                                                                                                                                              • silvestrov 31 minutes ago

                                                                                                                                                I'd say that Microsoft's moat was the copyright law and ability to bully the hardware companies with exclusive distribution contracts.

                                                                                                                                                Writing a new DOS (or Windows 3) from scratch is something a lot of developers could do.

                                                                                                                                                They just couldn't do it legally.

                                                                                                                                                And thus it was easy to bully Compaq and others into only distributing PCs with DOS/Windows installed. For some time you even had to pay the Microsoft fee when you wanted a PC with Linux installed.

                                                                                                                                                • melenaboija an hour ago

                                                                                                                                                  I agree with most of what you said. The main problem for me is that I don't see LLMs as being as solid a foundation for building a company as the technological progress of the '80s was.

                                                                                                                                                  I'm 42 though and already feel too old to understand the future lol

                                                                                                                                                • throwaway42668 an hour ago

                                                                                                                                                  It's okay to say it. It's a bubble.

                                                                                                                                                  It was just the next in line to be inflated after crypto.

                                                                                                                                                  • superluserdo an hour ago

                                                                                                                                                    I wouldn't write it off as a bubble, since that usually implies little to no underlying worth. Even if no future technical progress is made, it has still taken a permanent and growing chunk of the use case for conventional web search, which is an $X00bn business.

                                                                                                                                                    • thegeomaster an hour ago

                                                                                                                                                      A bubble doesn't necessarily imply no underlying worth. The dot-com bubble hit legendary proportions, and the same underlying technology (the Internet) now underpins the whole civilization. There is clearly something there, but a bubble has inflated the expectations beyond reason, and the deflation will not be kind on any player still left playing (in the sense of AI winter), not even the actually-valuable companies that found profitable niches.

                                                                                                                                                • whoiscroberts 21 minutes ago

                                                                                                                                                  Any person that thinks “automating human thought” is good for humanity is evil.

                                                                                                                                                  • rsynnott 29 minutes ago

                                                                                                                                                    I mean, in general, if you’re taking CEOs at their word, and particularly CEOs of tech companies at their word, you’re gonna have a bad time. Tech companies, and their CEOs, predict all manner of grandiose nonsense all the time. Very little of it comes to pass, but through the miracle of cognitive biases some people do end up filtering out the stuff that doesn’t happen and declaring them visionary.

                                                                                                                                                    • est an hour ago

                                                                                                                                                      sama is the best match for today's LLMs because of the "scaling law" Zuckerberg described. Everyone is burning cash to race to the end, but the billion-dollar question is: what is the end for transformer-based LLMs? Is there an end at all?

                                                                                                                                                      • vbezhenar an hour ago

                                                                                                                                                        The end is super-human reasoning along with super-human intuition, based on humanity knowledge.

                                                                                                                                                        • plaidfuji 41 minutes ago

                                                                                                                                                          Sure, and the end of biotech is perfect control of the human body from pre-birth until death, but innovation has many bottlenecks. I would be very surprised if the bottlenecks to LLM performance are compute and model architecture. My guess is it’s the data.

                                                                                                                                                          • Mistletoe 32 minutes ago

                                                                                                                                                            Are you sure you can get to that with transformers?

                                                                                                                                                            https://www.lesswrong.com/posts/SkcM4hwgH3AP6iqjs/can-you-ge...

                                                                                                                                                        • flenserboy an hour ago

                                                                                                                                                          that ship sailed a long time ago

                                                                                                                                                          • wicndhjfdn an hour ago

                                                                                                                                                            Our economy runs on market makers. AI, blockchain: whether they are what they seem in the long run is beside the point. Their sole purpose is to generate economic activity. Nobody really cares if they pan out.

                                                                                                                                                            • m3kw9 23 minutes ago

                                                                                                                                                              The Atlantic is really going out of their way to hate on Altman. That publication has always been a bit of a whack job of an outfit

                                                                                                                                                              • throwgfgfd25 17 minutes ago

                                                                                                                                                                > That publication has always been a bit of a whack job of an outfit

                                                                                                                                                                This is a bizarre take about a 167-year-old, continuously published magazine.

                                                                                                                                                              • jacknews 41 minutes ago

                                                                                                                                                                Does anyone take him at face value anyway?

                                                                                                                                                                The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.

                                                                                                                                                                If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.

                                                                                                                                                                Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.

                                                                                                                                                                • thwg an hour ago

                                                                                                                                                                  TSMC has stopped way earlier.

                                                                                                                                                                  • m3kw9 20 minutes ago

                                                                                                                                                                    You all should just sit back and not pick at every word he says; sit calmly and let him cook. And he's been cooking

                                                                                                                                                                    • klabb3 an hour ago

                                                                                                                                                                      > Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics.

                                                                                                                                                                      Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)

                                                                                                                                                                      > It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.

                                                                                                                                                                      I mean, everything good in life uses energy, that’s not AIs fault per se. However, we should absolutely evaluate tech anchored in the present, not the future. Especially with something we understand so poorly like emergent properties of AI. Even when there’s an expectation of rapid changes, the present is a much better proxy than yet-another sociopath with a god-complex whose job is to be a hype-man. Everyone’s predictions are garbage. At least the present is real.

                                                                                                                                                                      • twelve40 2 hours ago

                                                                                                                                                                        well the good news is all that stuff comes with an expiration date, after which we will know if this is our new destiny or yet another cloud of smoke.

                                                                                                                                                                        This is a good reminder:

                                                                                                                                                                        > Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution

                                                                                                                                                                        In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no bitcoin or credit cards or any of that phone nonsense, this is Silicon Valley), I have to stand in line at USPS and DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, and robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days" I guess!

                                                                                                                                                                        • namaria an hour ago

                                                                                                                                                                          I've said this before, at the root of all these technological promises lies a perpetual motion machine. They're all selling the reversal of thermodynamics.

                                                                                                                                                                          Any system complex enough to be useful has to be embedded in an ever more complex system. The age of mobile phone internet rests on the shoulders of an immense and enormously complex supply chain.

                                                                                                                                                                          LLMs are capturing low entropy from data online and distilling it for you while producing a shitton of entropy on the backend. All the water and energy dissipated at data centers, all the supply chains involved in building GPUs at the rate we are building them. There will be no magical moment when it's gonna yield more low entropy than what we put in on the other side as training data, electricity and clean water.

                                                                                                                                                                          When companies sell ideas like 'AGI' or 'self-driving cars' they are essentially promising you can do away with the complexity surrounding a complex solution. They are promising they can deliver low entropy on tap without paying for it in increased entropy elsewhere. It's physically impossible.

                                                                                                                                                                          You want human intelligence to do work, you need to deal with all the complexities of psychology, economics and politics. You want complex machines to do autonomous work, you need an army of people behind it. What AGI promises is that you can replace the army of people with another, more complex machine. It's a big bald-faced lie. You can't do away with the complexity. Someone will have to handle it.

                                                                                                                                                                          • ben_w an hour ago

                                                                                                                                                                            > It's physically impossible

                                                                                                                                                                            Your brain is proof to the contrary. AGI means different things to everyone, but a human brain definitely counts as "general intelligence", and that, implemented in silicon, is enough to get basically all the things promised by AGI: if it's done at the 20 watts per brain that biology manages, then all of humanity can be simulated within the power envelope of the USA electrical grid… three times over.
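
                                                                                                                                                                            That "three times over" figure can be sanity-checked with a rough back-of-the-envelope calculation; the inputs (~8 billion people, ~20 W per brain, roughly 4,200 TWh/year of US electricity generation) are my approximations, not the commenter's:

```python
# Rough sanity check: total power draw of every human brain
# versus the average output of the US electrical grid.

population = 8e9          # people, approximate world population
watts_per_brain = 20      # W, commonly cited figure for the human brain

brains_total_w = population * watts_per_brain   # total draw, in watts

# US annual electricity generation, converted to average power.
us_generation_twh_per_year = 4200               # TWh/year, approximate
hours_per_year = 24 * 365
us_average_w = us_generation_twh_per_year * 1e12 / hours_per_year

print(f"All human brains: {brains_total_w / 1e9:.0f} GW")
print(f"US grid average:  {us_average_w / 1e9:.0f} GW")
print(f"Ratio:            {us_average_w / brains_total_w:.1f}x")
```

                                                                                                                                                                            With these rough inputs, all brains come to about 160 GW against roughly 480 GW of average US generation, i.e. a factor of about three.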

                                                                                                                                                                            • nicomeemes 44 minutes ago

                                                                                                                                                                              You're lazily mixing metaphors here. This is the problem in all such discussions, it often gets reduced to some combination of hand waving and hype training. "AGI" means different things to everyone, okay? Then it's a meaningless term. It's like saying hey with a quantum computer of enormous size, we could simulate molecular interactions at a level impossible with current technology. I would love for us to be able to do that- but where is the evidence it is even possible?

                                                                                                                                                                              • ben_w 37 minutes ago

                                                                                                                                                                                There's no metaphor here, I meant literally doing those things.

                                                                                                                                                                                I followed up the point about AGI meaning different things by giving a common and sufficient standard of reference.

                                                                                                                                                                                Your brain is evidence that it's "even possible".

                                                                                                                                                                              • namaria an hour ago

                                                                                                                                                                                > a human brain definitely counts as "general intelligence", that implemented in silicon is enough to get basically all the things promised by AGI: if that's done at the 20 watts per brain that biology manages, then all of humanity can be simulated within the power envelope of the USA electrical grid… three times over.

                                                                                                                                                                                So far the only thing that has been proven is we can get low entropy from all the low entropy we've published on the internet. Will it get to a point where models can give us more low entropy than what is present in the training data? Categorically: no.

                                                                                                                                                                                • ben_w 35 minutes ago

                                                                                                                                                                                  You are using "entropy" in a way I do not recognise.

                                                                                                                                                                                  Whatever you mean, our brains prove it's possible to have a system that uses 20 watts to demonstrate human-level intelligence.

                                                                                                                                                                            • simonw an hour ago

                                                                                                                                                                              “robotaxis are still almost here, just around the corner, as always”

                                                                                                                                                                              We have them in San Francisco now (and Los Angeles and Phoenix, and Austin soon.)

                                                                                                                                                                              • aithrowawaycomm an hour ago

                                                                                                                                                                                By "robotaxi" I think they meant loosely "personal self driving car," not an automated taxi service.

                                                                                                                                                                                Waymo's overstated[1] success has let self-driving advocates do an especially pernicious bit of goalpost-shifting. I have been a self-driving skeptic since 2010, but if you had told me in 2010 that in 10-15 years we have robotaxis that were closely overseen by remote operators who can fill in the gaps I would have thought that was much more plausible than fully autonomous vehicles. And the human operators are truly critical, even more so than a skeptic like me assumed: https://www.nytimes.com/interactive/2024/09/03/technology/zo... (sadly the interactive is necessary here and archives don't work, this is a gift link)

                                                                                                                                                                                I still think fully autonomous vehicles on standard roads are 50+ years out. The argument was always that ~95% of driving is addressable by deep learning, but the remaining ~5% involves difficult problem-solving that cannot be solved by data because the data does not exist. It will require human oversight or an AI architecture which is capable of deterministic reasoning (not transformers), say at least at the level of a lizard. Since we have no clue how to make an AI as smart as a lizard, that 5% problem remains utterly intractable.

                                                                                                                                                                                [1] I have complained for years that Waymo's statisticians are comparing their cars to all human drivers when they should be comparing them to lawful human drivers whose vehicles are well-maintained. Tesla FSD proves that self-driving companies will respond to consumer demand for vehicles that speed and run red lights.

                                                                                                                                                                              • iamsrp 33 minutes ago

                                                                                                                                                                                > robotaxis are still almost here, just around the corner, as always.

                                                                                                                                                                                You can walk to where they're waiting for you.

                                                                                                                                                                                • dagw an hour ago

                                                                                                                                                                                  > barber demands paper cash only...I have to stand in line at USPS and DMV

                                                                                                                                                                                  Surely this is just a case of the future not being evenly distributed. All of these 'problems' are already solved and the solution is implemented somewhere, just not where you happen to be.

                                                                                                                                                                                  • twelve40 41 minutes ago

                                                                                                                                                                                    You're probably right. I'll wait until it gets better distributed. It's just that personally, it's been hard to reconcile the grandiose talk with what's actually around me, that's all.

                                                                                                                                                                                • nomilk an hour ago

                                                                                                                                                                                  tl;dr author complains that Sam's predictions of the future of AI are inflated (but doesn't offer any of his own), and complains that AI tools that surprised us last year look mundane now.

                                                                                                                                                                                  The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.

                                                                                                                                                                                  > it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

                                                                                                                                                                                  What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?

                                                                                                                                                                                  What LLMs can do blows my mind daily. There might be some insufferable hype atm, but geez, the math and engineering behind LLMs is incredible, and it's not done yet - they're still improving from more compute alone, not even factoring in architecture discoveries and innovations!

                                                                                                                                                                                  • raincole 21 minutes ago

                                                                                                                                                                                    > it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

                                                                                                                                                                                    This is such a ridiculous sentence.

                                                                                                                                                                                    GPT-4 now looks like any other chatbot because the technology advanced, so the other chatbots are smarter now as well. Somehow the author tries to twist this into a bad thing.

                                                                                                                                                                                  • photochemsyn an hour ago

                                                                                                                                                                                    Of course no corporate executive can be taken at their word, unless that word is connected to a legally binding contract, and even then, the executive may try to break the terms of the contract, and may have political leverage over the court system which would bias the result of any effort to bring them to account.

                                                                                                                                                                                    This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.

                                                                                                                                                                                    The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.

                                                                                                                                                                                    • richrichie an hour ago

                                                                                                                                                                                      He seems like any other tech “evangelical” to me.

                                                                                                                                                                                      • nottorp an hour ago

                                                                                                                                                                                        The time to stop taking him seriously was when he started his "fear AI, give me the monopoly" campaign.

                                                                                                                                                                                        • fnordpiglet an hour ago

                                                                                                                                                                                          While I agree that anyone taking Sam Altman at his word is and always was a fool, this opinion piece by a journalism major at a journalism school giving his jaded view of technology is the tired trope obsessed with the fact that reality in the present is always just reality in the present. The fact that I drive a car that's largely - if not entirely - autonomous in highly complex situations, is fueled by electricity alone, using a supercomputer to play music from an almost complete back catalog of everything released at my voice's command, on my way to my final cancer treatment for a cancer that ten years ago was almost always fatal, while above me constellations of satellites cooperate via lasers to provide global high-speed wireless internet being deployed by dozens upon dozens of private rocket launches as we prepare the final stretch towards interplanetary spaceships, over which computers can converse in true natural language with clear human voices with natural intonation… Well. Sorry, I don't have to listen to Sam Altman to see we live in a magical era of science fiction.

                                                                                                                                                                                          The most laughable part of the article is where they point at the fact that in the past TWO YEARS we haven't gone from "OMG we've achieved near-perfect NLP" to "Deep Thought, tell us the answer to life, the universe and everything" and treat it as some sort of huge failure, which is patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth, though, is that the rate of change in the products his company is making is still breathtaking - the text-to-speech tech in the latest advanced voice release (recognizing it's not actually text-to-speech but something profoundly cooler, but that's lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.

                                                                                                                                                                                          When did we go from enthralled by the rate of progress to bored that it's not fast enough? That what we dream and what we achieve aren't always 1:1, but that's still amazing? I get that when we put down the devices and switch off the noise we are still bags of mostly water, our back hurts, we aren't as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot-com boom, or nanotech, etc., as they call out in the article.

                                                                                                                                                                                          But the dot-com boom did succeed. When I started at early Netscape no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our everyday life. But the thing about amazing tech that sounds so dazzling when it's new is -it blends into the background- if it truly is that amazingly useful. That's not a problem with the vision of the future. It's the fact that the present will never stop being the present and will never feel like some illusory gauzy vision you thought it might be. But you still use dot-coms (this journalism major's assessment of tech was published on a dot-com and we are responding on a dot-com) and still live in a world powered by nanotechnology, and the AI delivered in TWO YEARS is still mind-boggling to anyone who is thinking clearly about what the goalposts for NLP and AI were five years ago.

                                                                                                                                                                                          • cyanydeez 2 hours ago

                                                                                                                                                                                            Better headline : It's too late to stop taking Sam Altman at his word

                                                                                                                                                                                            See same with Elon Musk.

                                                                                                                                                                                            Money turns geniuses into smooth-brained egomaniacal idiots. See same with Steve Jobs.

                                                                                                                                                                                            • surgical_fire 2 hours ago

                                                                                                                                                                                              They were never geniuses. They were just rich assholes propped up by other rich assholes.

                                                                                                                                                                                              "It's too late to stop conflating wealth with intelligence"

                                                                                                                                                                                              • golergka an hour ago

                                                                                                                                                                                                Regardless of personal qualities, for some reason these people have achieved great things for themselves and humanity, whereas countless competitors, including many other rich assholes, have not.

                                                                                                                                                                                                • surgical_fire 32 minutes ago

                                                                                                                                                                                                  > achieved great things for themselves and humanity

                                                                                                                                                                                                  For themselves? Absolutely.

                                                                                                                                                                                                  For humanity? Perhaps we have wildly different ideas of what is good for humanity.

                                                                                                                                                                                                  • lor_louis an hour ago

                                                                                                                                                                                                    Luck and money.

                                                                                                                                                                                                • dimgl 2 hours ago

                                                                                                                                                                                                  I'm not sure this is the same situation... SpaceX just began a mission to bring home stranded astronauts. And Starlink has legitimate uses.

                                                                                                                                                                                                  • OKRainbowKid 44 minutes ago

                                                                                                                                                                                                    That doesn't make Musk any less of an egomaniacal idiot in my eyes.

                                                                                                                                                                                                  • jijijijij 19 minutes ago

                                                                                                                                                                                                    How could they not? The word `wealth` or the idea of "money" is completely misleading here. It's a cancerous accumulation of resources and influence. They are completely detached from consequential reality. The human brain has not evolved to thrive under conditions of total, unconditional material abundance. People struggle to moderate sugar intake; imagine unlimited access to everything. And it's an inherently amoral existence, leading to the necessity of unhinged internal models of the world to justify continuation and reward. Their sense of self-efficacy derailed in zero-g. Listen to them talk about fiction... They literally can't tell the price of a banana; how can they possibly get any meaningful story told? All that is left is the aesthetics and mechanical exterior of narration. How can there be love or friendship without normal people grounding you? You could make everyone you ever met during your lifetime a millionaire, while effectively changing nothing for yourself. Nobody can be this rich and not lose touch with common shared reality.

                                                                                                                                                                                                    Billionaires are shameful for the collective; they should be a source of shame to every one of us. They are fundamentally the most unfit for leadership. They are evidence of civilizational failure; the least we can do is not idolize them.

                                                                                                                                                                                                    • throwgfgfd25 27 minutes ago

                                                                                                                                                                                                      Jobs was a sort of cracked genius and a very imperfect human who wanted to be a better human. Money didn't make him worse, or better. It didn't really change him at all on a personal level. It didn't even make him more confident, because he was always that. Look back through anecdotes about him in his life and he's just the same guy, all the time.

                                                                                                                                                                                                      Even the stories I heard about him from one of his indirect reports back in the pre-iCEO "Apple is still fucked, NeXT is a distracted mess" era were just like stories told about him from the dawn of Apple and in the iPhone era.

                                                                                                                                                                                                      Musk and Altman are opportunists. Musk appears to be a malignant narcissist. Neither seems in a rush to be a better human.

                                                                                                                                                                                                      • imjonse 2 hours ago

                                                                                                                                                                                                        Are you seriously suggesting Altman is a genius? Or Musk for that matter?

                                                                                                                                                                                                        • api 2 hours ago

                                                                                                                                                                                                          Money removes social feedback. You end up surrounded with bobble heads telling you how genius you are… because they want your money. This is terrible for human psychology. It’s almost like a kind of solitary confinement — solitary in the sense that you are utterly deprived of meaningful rich human contact.

                                                                                                                                                                                                          • aomix 2 hours ago

                                                                                                                                                                                                            I think about this comparison sometimes https://x.com/Merman_Melville/status/1088527693757349888?lan...

                                                                                                                                                                                                            "Being a billionaire must be insane. You can buy new teeth, new skin. All your chairs cost 20,000 dollars and weigh 2,000 pounds. Your life is just a series of your own preferences. In terms of cognitive impairment it's probably like being kicked in the head by a horse every day"

                                                                                                                                                                                                            Solitary confinement is a great comparison. But not existing in the same reality as 99.99% of the population must really warp you too.

                                                                                                                                                                                                            • yownie 2 hours ago

                                                                                                                                                                                                              It's this more than anything else I wish the general public would understand: those same bobble heads that surround celebrities eventually warp any sense of the cost of common everyday items. We often only see the end result of this when so-and-so celebrity declares bankruptcy and the masses cheer.

                                                                                                                                                                                                              In reality they've been vampire sucked dry by close family / friends / salesmen for years and didn't know it.

                                                                                                                                                                                                              • throwaway42668 2 hours ago

                                                                                                                                                                                                                Sam Altman is not a hapless victim at the mercy of the isolating effects of his financial success.

                                                                                                                                                                                                                He was an opportunistic, amoral sociopath before he was rich, and the system he reaps advantage from strongly selects for hucksters of that particular ilk more than anything else.

                                                                                                                                                                                                                He's just another Kalanick, Neumann, Holmes or Bankman-Fried.

                                                                                                                                                                                                            • DemocracyFTW2 2 hours ago

                                                                                                                                                                                                              > Last week, CEO Sam Altman published an online manifesto titled “The Intelligence Age.” In it, he declares that the AI revolution is on the verge of unleashing boundless prosperity and radically improving human life.

                                                                                                                                                                                                              /s

                                                                                                                                                                                                              • kopirgan 14 minutes ago

                                                                                                                                                                                                                I generally don't take anyone other than Leon at his word. /s

                                                                                                                                                                                                                • bediger4000 2 hours ago

                                                                                                                                                                                                                  This seems like a more general problem with journalistic practices. Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense. So they quote people exactly. Quoting exactly means that bad actors can inject falsehoods into articles.

                                                                                                                                                                                                                  I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.

                                                                                                                                                                                                                  • MailleQuiMaille 2 hours ago

                                                                                                                                                                                                                    >Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense.

                                                                                                                                                                                                                    Is it even possible? Like, don't you know the political inclination of any website/journal you read? I feel like this search for "The Objective Truth" is just a chimera. I'd rather articles combine pros and cons of everything they discuss, tbh.

                                                                                                                                                                                                                    • Moto7451 2 hours ago

                                                                                                                                                                                                                      There’s a difference between having natural human biases you try to avoid when reporting by using the usual context sentence (where, when, to whom something was stated), quote, appositive denoting speaker, quote format and writing “this guy is full of crap” or “you really need to believe this person” while cherry picking statements.

                                                                                                                                                                                                                      You can easily find examples of each. Both NYT and Slate are considered left-leaning and at the same time have been the professional stomping grounds of right-leaning writers who started their own media companies that are not left-leaning. Everyone has a bias and they don’t have to work somewhere with that same bias, especially if you just stick to the paper’s style guide. On the same substance the two media outlets present the same topic very differently. Sometimes I appreciate the Slate format for the author’s candor and view being injected (like being pointed on Malcolm Gladwell). Sometimes I just want to know the facts as clearly stated as possible (I don’t care if the author doesn’t believe in climate change, tell me what happened when North Carolina flooded).

                                                                                                                                                                                                                      • smogcutter 2 hours ago

                                                                                                                                                                                                                        Yes, you’ve rediscovered the curriculum of a journalism 101 class.

                                                                                                                                                                                                                        • Aeolun 2 hours ago

                                                                                                                                                                                                                          So are you saying there are a lot of journalists that never studied, or did they just never pay attention in class?

                                                                                                                                                                                                                          Because articles that actually do that are few and far between.

                                                                                                                                                                                                                      • Apreche 2 hours ago

                                                                                                                                                                                                                        It’s possible to insert a few sentences factually accounting for a person’s character without inserting a subjective judgement of character.

                                                                                                                                                                                                                        For example you could say:

                                                                                                                                                                                                                        Joey JoeJoe, billionaire CEO, who notably said horrible things, was convicted of some crimes, and ate three babies, was quoted as saying “machine learning is just so awesome”.

                                                                                                                                                                                                                        There, you didn’t inject a judgement. You accurately quoted the subject. You gave the reader enough contextual information about the person so they know how much to trust or not-trust the quote.

                                                                                                                                                                                                                        • cogman10 2 hours ago

                                                                                                                                                                                                                          This does often happen (depending on the leaning of the newspaper, it's omitted if the figure is someone supported and emphasized otherwise).

                                                                                                                                                                                                                          A major problem, though, is headlines don't and can't carry this context. And those are the things most people read.

                                                                                                                                                                                                                          The best you'll get is "Joey JoeJoe says machine learning is just so awesome" or, at worst, "Joey JoeJoe comments on ML. The 3rd word will blow you away!".

                                                                                                                                                                                                                          • afavour an hour ago

                                                                                                                                                                                                                            Extreme examples are easy. But you can pick and choose which facts to present to the reader to affect the judgement they’re making. It would be trivially easy to paint Bill Gates as either a legendary humanitarian or a ruthless capitalist egotist to someone that’s never heard of him.

                                                                                                                                                                                                                            • secondcoming 2 hours ago

                                                                                                                                                                                                                              "Nelson Mandela, convicted of some crimes, calls for World Peace"

                                                                                                                                                                                                                            • PKop an hour ago

                                                                                                                                                                                                                              Or more likely, the journalists don't know any better and believe the AI hype sold to them and promote it of their own accord.

                                                                                                                                                                                                                              • Dalewyn 2 hours ago

                                                                                                                                                                                                                                A journalist's job is to journal something, exactly like how NTFS keeps a journal of what happens.

                                                                                                                                                                                                                                A journalist doing anything other than journaling is not a journalist.

                                                                                                                                                                                                                                So people getting quoted verbatim is perfectly fine. If the quoted turns out to be a liar, that's just part of the journal.

                                                                                                                                                                                                                                • bee_rider 2 hours ago

                                                                                                                                                                                                                                  I don’t think that’s right. First off, we don’t generally define jobs based on the closest computer analogy (we would be unhappy if the loggers returned with a list of things that happened in the woods, rather than a bunch of wood).

                                                                                                                                                                                                                                  The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it. Some bias will inevitably creep in, because they can’t possibly describe every event that has ever happened to their subject. But for example if they are interviewing somebody who usually lies, it would be more accurate to at least include a small note about that.

                                                                                                                                                                                                                                  • Dalewyn 2 hours ago

                                                                                                                                                                                                                                    >The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it.

                                                                                                                                                                                                                                    The former is a journalist's job; the latter is the reader's concern, not the journalist's.

                                                                                                                                                                                                                                    One of the reasons I consider journalism a cancer upon humanity is because journalists can't just write down "it is 35 degrees celsius today at 2pm", but rather "you won't believe how hot it is".

                                                                                                                                                                                                                                    Just journal down what the hell happens literally and plainly, we as readers can and should figure out the rest. NTFS doesn't interject opinions and clickbait into its journal, and neither should proper journalists.

                                                                                                                                                                                                                                    • tiznow 2 hours ago

                                                                                                                                                                                                                                      I think you might need a chill pill; I've never met a single journalist or editor who would let "you won't believe how hot it is" pass in more than a tweet.

                                                                                                                                                                                                                                      • Dalewyn an hour ago

                                                                                                                                                                                                                                        As a counterexample, I have deep respect for weather forecasters because they are professionally and legally bound to state nothing but the scientific journal at hand.

                                                                                                                                                                                                                                        "Typhoon 14 located 500km south of Tokyo, Japan with a pressure of 960hPa and moving north-northeast at a speed of 30km/h is expected to traverse so-and-so estimated course of travel at 6pm tomorrow."

                                                                                                                                                                                                                                        "Let's go over to Arizona. It's currently 105F in Tucson, 102F in Yuma, ..."

                                                                                                                                                                                                                                        Brutally to the point, the readers are left to process that information as appropriate.

                                                                                                                                                                                                                                        Journalists do not do this, and they should if they claim to be journalists.

                                                                                                                                                                                                                                        • tiznow an hour ago

                                                                                                                                                                                                                                          >they are professionally and legally bound to state nothing but the scientific journal at hand

                                                                                                                                                                                                                                          In America, just about every meteorologist editorializes the weather to a degree. There's nothing scientific about telling me "it's a great night for baseball" (great for the fans? Pitchers? Hitters?) or "don't wash your car just yet," but I will never stop hearing those. I don't, and the public doesn't seem to think that infringes on journalistic standards, because the information is still presented. Maybe this is different than what you mean -- if you're talking about a situation where journalists intentionally obscured the full context and pushed the information to the side, obviously that is undesirable.

                                                                                                                                                                                                                                          I will add that weather as a "news product" actually gains quite a fair bit from presenter opinion, and news is a product above all.

                                                                                                                                                                                                                                      • bee_rider an hour ago

                                                                                                                                                                                                                                        The journalist is making a product for the reader in the best case, their job is to help the reader. The second example you mention is a typical example of clickbait journalism, where the journalist has betrayed the reader and is trying to steal their attention, because they actually serve advertisers.

                                                                                                                                                                                                                                        But the first example is not very useful either. That journalist could be replaced by a fully automated thermometer. Or weather stations with an API. Context is useful: “It is 35 degrees Celsius, and we’re predicting that it will stay sunny all day” will help you plan your day. “It is 35 degrees Celsius today, finishing off an unseasonably warm September” could provide a little info about the overall trend in the weather this year.

                                                                                                                                                                                                                                        I don’t see any particular reason that journalists should follow your definition, which you seem to have just… made up?

                                                                                                                                                                                                                                        • Dalewyn an hour ago

                                                                                                                                                                                                                                          >your definition, which you seem to have just… made up?

                                                                                                                                                                                                                                          See: https://www.merriam-webster.com/dictionary/journal

                                                                                                                                                                                                                                          Specifically noun, senses 2B through 2F.

                                                                                                                                                                                                                                          I expect journalists to record journals and nothing more, nothing less; not editorials or opinion pieces, which are written by authors or columnists or whatever.

                                                                                                                                                                                                                                          • bee_rider an hour ago

                                                                                                                                                                                                                                            Dictionary similarity is not how people get their job descriptions. If you want to just pick a similar word from the dictionary, why are journalists sharing this stuff? Journals are typically private, after all. If someone read your journal, you might be annoyed, right?

                                                                                                                                                                                                                                            Or, from your definition, apparently:

                                                                                                                                                                                                                                            > the part of a rotating shaft, axle, roll, or spindle that turns in a bearing

                                                                                                                                                                                                                                            I don’t think these journalists rotate much at all!

                                                                                                                                                                                                                                            A better definition is one of… journalism.

                                                                                                                                                                                                                                            https://www.britannica.com/topic/journalism

                                                                                                                                                                                                                                            journalism, the collection, preparation, and distribution of news and related commentary and feature materials through such print and electronic media as […]

                                                                                                                                                                                                                                            That said, I don’t think an argument from definition is all that good anyway. These definitions are descriptive, not prescriptive. Journalism is a profession, they do what they do for the public good. If you think that it would be better for the field of journalism to produce a contextless log of events, defend that idea in and of itself, rather than leaning on some definition.

                                                                                                                                                                                                                                            • vundercind 23 minutes ago

                                                                                                                                                                                                                                              Why favor a definition of “journalist” that approximately nobody else uses? It seems like it would just make it hard to communicate.

                                                                                                                                                                                                                                          • ks2048 an hour ago

                                                                                                                                                                                                                                            You're describing a microphone / voice recorder, not a journalist.

                                                                                                                                                                                                                                            There are of course places you can go to get raw weather data, but a journalist might put it in context of what else is going on, interview farmers or climatologists about the situation, etc.

                                                                                                                                                                                                                                            There are lots of kinds of journalism, but maybe most important is investigative journalism. They are literally doing an investigation - reading source material, actively seeking out the right people to interview and asking them right questions, following the leads to more information.

                                                                                                                                                                                                                                        • soared 2 hours ago

                                                                                                                                                                                                                                          That's... not what journalism means. I don’t know where you got that definition, but I can’t find anything similar. Processing and displaying information is a huge part of journalism - ie assessing what is truth or fiction and communicating each as such. Wikipedia: > A journalist is a person who gathers information in the form of text, audio or pictures, processes it into a newsworthy form and disseminates it to the public. This is called journalism.

                                                                                                                                                                                                                                          • zmgsabst 2 hours ago

                                                                                                                                                                                                                                            You’re injecting that “ie” — Wikipedia doesn’t say it as such.

                                                                                                                                                                                                                                            They’re describing collating and you’re describing evaluating.

                                                                                                                                                                                                                                            • Dalewyn 2 hours ago

                                                                                                                                                                                                                                              And to add, evaluating is the responsibility of the reader.

                                                                                                                                                                                                                                              If you're also tasking "journalists" to evaluate for you, you aren't a reader and they aren't journalists. You're just a dumb terminal getting programs (others' opinions) installed and they are influencers.

                                                                                                                                                                                                                                          • llamaimperative 2 hours ago

                                                                                                                                                                                                                                            You’re describing a PR representative. Simply the decision of what to cover is inherently selective and driven by an individual’s and a culture’s priorities.

                                                                                                                                                                                                                                            • i80and 2 hours ago

                                                                                                                                                                                                                                              That sounds more like stenography than anything else

                                                                                                                                                                                                                                              • booleandilemma an hour ago

                                                                                                                                                                                                                                                Sounds like you would be a bad journalist.

                                                                                                                                                                                                                                                • bediger4000 an hour ago

                                                                                                                                                                                                                                                  > A journalist's job is to journal something, exactly like how NTFS keeps a journal of what happens.

                                                                                                                                                                                                                                                  Your choice of metaphor points out problems with your definition. Avid Linux users will be immediately biased against what you wrote, true though it may be, because you assumed that NTFS is the predominant, or even good example of journaling file systems.