• bdndndndbve 3 hours ago

    The current AI hype wave has really hit a nerd soft spot - that we're steps away from AGI. Surely if a computer can make plausible-looking but incorrect sentences we're days away from those sentences being factually accurate! The winter after this is gonna be harsh.

    • egocodedinsol a minute ago

      this attitude is so ridiculously disingenuous. Surely if a computer can score incredibly well on math olympiad questions, among other things, "a computer can make plausible-looking but incorrect sentences" is dismissive at best.

      I have no idea about AGI but honestly how can you use Claude or ChatGPT and come away unimpressed? It's like looking at SpaceX and saying golly, the space winter is going to be harsh because they haven't gotten to Mars yet.

      • JanSt 3 hours ago

        Using Claude 3.5 Sonnet in Cursor Composer already shows huge benefits for coding. I'm more productive than ever before. The models are still getting better and better. I'm not saying AGI is right around the corner or that we will reach it, but the benefits are undeniable. o1 added test-time compute. No need to be snarky.

        • TheCondor an hour ago

          It’s not snark, our industry is run on fear. If there is the tiniest flicker of potential, we will spend piles of money out of fear of being left behind. As you age, it becomes harder to deny: 10 years ago I was starting to believe that my kids would never learn to drive or possibly buy a car, yet here we are ten years later and not that much has changed. I know you can take a robotaxi in some cities, but nearly all interstate trucking still has someone driving.

          Coding AI assistants have done some impressive things, I’ve been amazed at how they sniffed out some repetitive tasks I was hacking on and I just tab-completed pages of code that were pretty much correct. There is use. I pay for the feature. I don’t know if it’s worth 35% of the world’s energy consumption and all new fabrication resources over the next handful of years being dedicated to ‘ai chips.’ We aren’t looking for a better 2.0, we are expecting an exponentially better “2.0”, and those are very rare.

          • jsheard 3 hours ago

            There's no accounting for taste, but keep in mind that all of these services are currently losing money, so how much would you actually be willing to pay for the service you're currently getting in order to let it break even? There was a report that Microsoft is losing $20 for every $10 spent on Copilot subscriptions, with heavy users costing them as much as $80 per month. Assuming you're one of those heavy users, would you pay >$80 a month for it?

            Then there's chain-of-thought being positioned as the next big step forwards, which works by throwing more inferencing at the problem, so that cost can't be amortized over time like training can...

            • binocarlos 3 hours ago

              I would pay hundreds of dollars per month for the combination of Cursor and Claude - I could not get my head around it when my beginner-level colleague said "I just coded this whole thing using cursor".

              It was an entire web app, with search filters, tree based drag and drop GUIs, the backend api server, database migrations, auth and everything else.

              Not once did he need to ask me a question. When I asked him "how long did this take", I expected him to say "a few weeks" (it would have taken me - a far more experienced engineer - 2 months minimum).

              His answer was "a few days".

              I'm not saying "AGI is close", but I've seen tangible evidence (only in the last 2 months) that my 20-year software engineering career is about to change, and massively for the upside. The way I see it, everyone is going to be so much more productive using these tools.

              • aniviacat 3 hours ago

                Current LLMs fail if what you're coding is not the most common of tasks. And a simple web app is about as basic as it gets.

                I've tried using LLMs for some libraries I'm working on, and they failed miserably. Trying to make an LLM implement a trait with a generic type in Rust is a game of luck with very poor chances.

                I'm sure LLMs can massively speed up tasks like front-end JavaScript development, simple Python scripts, or writing SQL queries (which have been written a million times before).

                But for anything even mildly complex, LLMs are still not suited.

                • dathinab 3 hours ago

                  I don't think complexity is the right metric.

                  front-end JS can easily also become very complex

                  I think a better metric is how close you are to reinventing a wheel for the thousandth time, because that is what LLMs are good at: helping you write code that has already been written, in nearly the same way, thousands of times.

                  That is something you find in backend code, too.

                  It is also an area where we as an industry have kinda failed to produce good tooling. And worse, if you are in the industry it's kinda hard to spot without very carefully taking a hundred (mental) steps back from what you are used to and what biases you might have.

                  • mrybczyn 2 hours ago

                    LLM Code Assistants have succeeded at facilitating reusable code. The grail of OOP and many other paradigms.

                    We should not have an entire industry of 10,000,000 devs reinventing the JS/React/Spring/FastCGI wheel. I'm sure those humans can contribute in much better ways to society and progress.

                    • itishappy 2 hours ago

                      > LLM Code Assistants have succeeded at facilitating reusable code.

                      I'd have said the opposite. I think LLMs facilitate disposable code. It might use the same paradigms and patterns, but my bet is that most LLM written code is written specifically for the app under development. Are there LLM written libraries that are eating the world?

                      • dbmikus an hour ago

                        I believe you're both saying the same thing. LLMs write "re-usable code" at the meta level.

                        The code itself is not clean and reusable across implementations, but you don't even need that clean packaged library. You just have an LLM regenerate the same code for every project you need it in.

                        The LLM itself, combined with your prompts, is effectively the reusable code.

                        Now, this generates a lot of slop, so we also need better AI tools to help humans interpret the code, and better tools to autotest the code to make sure it's working.

                        I've definitely replaced instances where I'd reach for a utility library, instead just generating the code with AI.
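
                        For instance (an illustrative sketch, not code from any actual project - the helper and inputs are made up): instead of pulling in a slug utility library, you might just have the model generate the small single-purpose function inline:

```python
import re
import unicodedata

def slugify(text: str) -> str:
    """Turn arbitrary text into a URL-friendly slug.

    The kind of small, self-contained helper one might previously
    have imported from a utility library.
    """
    # Normalize accented characters to plain ASCII equivalents
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    # Lowercase, collapse runs of non-alphanumerics into single hyphens
    text = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    return text

print(slugify("Héllo, World!"))  # hello-world
```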

                        I think we also have an opportunity to merge the old and the new. We can have AI that can find and integrate existing packages, or it could generate code, and after it's tested enough, help extract and package it up as a battle tested library.

                        • itishappy 29 minutes ago

                          Agreed. But this terrifies me. The goal of reusable code (to my mind) is that with everybody building from the same foundations we can enable more functional and secure software. Library users contributing back (even just bug reports) is the whole point! With LLMs creating everything from scratch, I think we're setting ourselves on a path towards less secure and less maintainable software.

                  • PaulHoule 2 hours ago

                    Roughly LLMs are great at things that involve a series of (near) 1-1 correspondences like “translate 同时采访了一些参与其中的活跃用户 to English” or “How do I move something up 5px in CSS without changing the rest of the layout?” but if the relationship of several parts is complex (those Rust traits or anything involving a fight with the borrow checker) or things have to go in some particular order it hasn’t seen (say US states in order of percent water area) they struggle.

                    SQL is a good target language because the translation from ideas (or written description) is more or less linear, the SQL engine uses entirely different techniques to turn that query into a set of relational operators which can be rewritten for efficiency and compiled or interpreted. The LLM and the SQL engine make a good team.
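
                    To make that concrete (a toy sketch with a made-up schema, not from any real session): the English description maps almost clause-for-clause onto the SQL, and the engine then does the hard work of planning and executing it.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 30.0), ("bob", 10.0), ("alice", 5.0)])

# "Total order value per customer, highest first" becomes, nearly word for word:
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('alice', 35.0), ('bob', 10.0)]
```

                    The LLM only has to do the near-linear translation step; the relational rewriting and optimization stay with the engine.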

                    • infecto 2 hours ago

                      I’d bet that about 90% of software engineers today are just rewriting variations of what’s already been done. Most problems can be reduced to similar patterns. Of course, the quality of a model depends on its training data: if a library is new or the language isn’t widely used, the output may suffer. However, this is a challenge people are actively working on, and I believe it’s solvable.

                      LLMs are definitely suited for tasks of varying complexity, but like any tool, their effectiveness depends on knowing when and how to use them.

                      • fhd2 2 hours ago

                        That's still valuable though: For problem validation. It lowers the table stakes for building any sort of useful software, which all start simple.

                        Personally, I just use the hell out of Django for that. And since tools like that are already ridiculously productive, I don't see much upside from coding assistants. But by and large, so many of our tools are so surprisingly _bad_ at this that I expect the LLM hype to have a lasting impact here. Even _if_ the solutions aren't actually LLMs, but just better tools, since we've recalibrated how long something _should_ take.

                        • skydhash 2 hours ago

                          The problem Django solves is popular, which is why we have so many great frameworks that shorten the implementation time (I use Laravel for that). Just like game engines or GUI libraries, assuming you understand the core concepts of the domain. And if the tool was very popular and the LLMs have loads of data to train on, there may be a small productivity tick by finding common patterns (small because if the patterns are common enough, you ought to find a library/plugin for it).

                          Bad tools often fall into three categories: too simple, too complex, or unsuitable. For the last two, you'd better switch, but there's the human element of sunk costs.

                        • ben_w 3 hours ago

                          > Current LLMs fail if what you're coding is not the most common of tasks

                          Succeeding on the most common tasks (which isn't exactly what you said) is identical to "they're useful".

                          • abm53 2 hours ago

                            And I would go further… these “common tasks” cover 80% of the work in even the most demanding engineering or research positions.

                            • layer8 2 hours ago

                              That’s absolutely not my experience. I struggle to find tasks in my day to day work where LLMs are saving me time. One reason is that the systems and domains I work with are hardly represented at all on the internet.

                              • scruple an hour ago

                                I have the same experience. I'm in gamesdev and we've been encouraged to test out LLM tooling. Most of us at/above the senior level report the same experience: it sucks, it doesn't grasp the broader context of the systems that these problems exist inside of, even when you prompt it as best as you can, and it makes a lot of wild assed, incorrect assumptions about what it doesn't know and which are often hard to detect.

                                But it's also utterly failed to handle mundane tasks, like porting legacy code from one language and ecosystem to another, which is frankly surprising to me because I'd have assumed it would be perfectly suited for that task.

                                • nicolas_t 40 minutes ago

                                  In my experience, AI for coding is like having a rather stupid, very junior dev at your beck and call who can produce results instantly. The output is just often very mediocre, and getting it fixed often takes longer than writing it on your own.

                          • bee_rider an hour ago

                            > Not once did he need to ask me a question. When I asked him "how long did this take" and expected him to say "a few weeks" (it would have taken me - a far more experienced engineer - 2 months minimum).

                            > Current LLMs fail if what you're coding is not the most common of tasks. And a simple web app is about as basic as it gets.

                            These two complexity estimates don’t seem to line up.

                            • znpy 2 hours ago

                              I had similar experiences:

                              1. Asked ChatGPT to write a simple echo server in C, but with this twist: use io_uring rather than the classic sendmsg/recvmsg. The code it spat out wouldn't compile, let alone work. It was wrong on many points, clearly pieces of who-knows-what cut and pasted together. After banging my head on the docs for a while I could clearly determine which sources the io_uring code segments were coming from. The code barely made any sense and was completely incorrect both syntactically and semantically.

                              2. Asked another LLM to write an AWS IAM policy according to some specifications. It hallucinated and used predicates that do not exist at all. I mean, I could have done it myself if I just could have made predicates up.

                              > But for anything even mildly complex, LLMs are still not suited.

                              Agreed, and I'm not sure we are anywhere close to them being.

                              • mattgreenrocks 26 minutes ago

                                Yep. LLMs don’t really reason about code, which turns out to not be a problem for a lot of programming nowadays. I think devs don’t even realize that the substrate they build on requires this sort of reasoning.

                                This is probably why there’s such a divide when you try to talk about software dev online. One camp believes that it boils down to duct taping as many ready made components together all in pursuit of impact and business value. Another wants to really understand all the moving parts to ensure it doesn’t fall apart.

                              • gambiting 2 hours ago

                                I work in video games, I've tried several AI assistants for C++ coding and they are all borderline useless for anything beyond writing some simple for loops. Not enough training data to be useful I bet, but I guess that's where the disparity is - web apps, python....that has tonnes of publicly available code that it can train on. Writing code that manages GPU calls on a PS5? Yeah, good luck with that.

                                • maroonblazer 2 hours ago

                                  Presumably Sony is sitting on decades worth of code for each of the PlayStation architectures. How long before they're training their own models and making those available to their studios' developers?

                                  • skydhash 2 hours ago

                                    I don't think Sony has that code, more likely just the finished builds. And all the major studios have game engines for their core product (or they license one). The most difficult part is writing new game mechanics or supporting a new platform.

                              • Roark66 2 hours ago

                                I can't understand how anyone can use these tools (copilot especially) to make entire projects from scratch and expand them later. They just lead you down the wrong path 90% of the time.

                                Personally I much prefer ChatGPT. I give it specific small problems to resolve, plus some context: at most 100 lines of code. If it gets more than that, the quality goes to shit. In fact, Copilot feels like ChatGPT that was given too much context.

                                • sensanaty 2 hours ago

                                  I hear it all the time on HN that people are producing entire apps with LLMs, but I just don't believe it.

                                  All of my experiences with LLMs have been that for anything that isn't a braindead-simple for loop, the output is unworkable garbage that takes more effort to fix than if you had just written it from scratch to begin with. And then you're immediately met with "You're using it wrong!", "You're using the wrong model!", "You're prompting it wrong!" and my favorite, "Well, it boosts my productivity a ton!".

                                  I sat down with the "AI Guru" as he calls himself at work to see how he works with it and... He doesn't. He'll ask it something, write an insanely comprehensive prompt, and it spits out... Generic trash that looks the same as the output I ask of it when I provide it 2 sentences total, and it doesn't even work properly. But he still stands by it, even though I'm actively watching him just dump everything he just wrote up for the AI and start implementing things himself. I don't know what to call this phenomenon, but it's shocking to me.

                                  Even something that should be in its wheelhouse like producing simple test cases, it often just isn't able to do it to a satisfactory level. I've tried every one of these shitty things available in the market because my employer pays for it (I would never in my life spend money on this crap), and it just never works. I feel like I'm going crazy reading all the hype, but I'm slowly starting to suspect that most of it is just covert shilling by vested persons.

                                  • insane_dreamer an hour ago

                                    The other day I decided to write a script (that I needed for a project, but ancillary, not core code) entirely with CoPilot. It wasn't particularly long (maybe 100 lines of python). It worked. But I had to iterate so much with the LLM, repeating instructions, fixing stuff that didn't run, that it took a fair bit longer than if I had just written it myself. And this was a fairly vanilla data science type of script.

                                    • meiraleal a few seconds ago

                                      LLMs are great at going from 0 to 2, but you wanted to go to 1, so you remove and modify lots of things, get back to 1, and then go to 2.

                                      Lots of people are terrible at going from 0 to 1 in any project. Me included. LLMs have helped me a lot with this. It is so much easier to iterate over something that already exists.

                                      • mattgreenrocks 14 minutes ago

                                        You aren’t the only one that feels this way.

                                        After 20 years of being held accountable for the quality of my code in production, I cannot help but feel a bit gaslit that decision-makers are so elated with these tools despite their flaws that they threaten to take away jobs.

                                        • skydhash an hour ago

                                          Here is another example [0]. 95% of the code was taken as-is from the examples in the documentation. If you still need to read the code after it was generated, you may as well have read the documentation first.

                                          When they say treat it like an intern, I'm so confused. An intern is there to grow and hopefully replace you as you get promoted or leave. The tasks you assign to them are purposely kept simple so they can learn the craft. The monotonous ones should be done by the computer.

                                          [0]: https://gist.github.com/simonw/97e29b86540fcc627da4984daf5b7...

                                          • flir an hour ago

                                            Just for fun, give it a function you wrote, and ask it if it can make any improvements. I reckon I accept about a third of what it suggests.
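
                                            A made-up example of the kind of suggestion I mean (the function names are mine, not from any real session): hand it a hand-rolled loop and it proposes the idiomatic built-in instead.

```python
def longest_word(words):
    # Original hand-rolled version: track the best candidate manually.
    best = ""
    for w in words:
        if len(w) > len(best):
            best = w
    return best

def longest_word_improved(words):
    # The kind of rewrite an assistant might suggest: same behavior,
    # expressed with the built-in max() and a key function.
    # default="" preserves the original's result for an empty list.
    return max(words, key=len, default="")

words = ["ai", "assistant", "code"]
assert longest_word(words) == longest_word_improved(words) == "assistant"
```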

                                            • mattgreenrocks 12 minutes ago

                                              Not a bad use, though I argue being able to do that critique yourself has a compounding effect over time that is worthwhile.

                                            • Workaccount2 an hour ago

                                              As a non-programmer at a non-programming company:

                                              I use it to write test systems for physical products. We used to contract the work out or just pay someone to manually do the tests. So far it has worked exceptionally well for this.

                                              I think the core issue of the "do LLMs actually suck" is people place different (and often moving) goalposts for whether or not it sucks.

                                              • achempion an hour ago

                                                I have the same observation as well. The hype is getting generated mostly by people who're selling AI courses or AI-related products.

                                                It works well as a smart documentation search where you can ask follow-up questions, or when you'd recognize the right output if you saw it but can't type it from memory.

                                                For code assistants (aka Copilot / Cursor), it works if you don't care about the code at all and are OK with any solution as long as it's barely working (I'm OK with such code for my emacs configuration).

                                            • rurp 11 minutes ago

                                              Fairly standard greenfield projects seem to be the absolute best scenario for an LLM. It is impressive, but that's not what most professional software development work is, in my experience. Even once I know what specifically to code I spend much more time ensuring that code will be consistent and maintainable with the rest of the project than with just getting it to work. So far I haven't found LLMs to be all that good at that sort of work.

                                              • cml123 2 hours ago

                                                yes, but does your colleague even fully understand what was generated? Does he have a good mental map of the organization of the project?

                                                I have a good mental map of the projects I work on because I wrote them myself. When new business problems emerge, I can picture how to solve them using the different components of those applications. If I hadn't actually written the application myself, that expertise would not exist.

                                                Your colleague may have a working application, but I seriously doubt he understands it in the way that is usually needed for maintaining it long term. I am not trying to be pessimistic, but I _really_ worry about these tools crippling an entire generation of programmers.

                                                • alonsonic 2 hours ago

                                                  AI assistants are also quite good at helping you create a high level map of a codebase. They are able to traverse the whole project structure and functionality and explain to you how things are organized and what responsibilities are. I just went back to an old project (didn't remember much about it) and used Cursor to make a small bug fix and it helped me get it done in no time. I used it to identify where the issue might be based on logs and then elaborate on potential causes before then suggesting a solution and implementing it. It's the ultimate pair programmer setup.

                                                  • insane_dreamer an hour ago

                                                    > I just went back to an old project (didn't remember much about it) and used Cursor to make a small bug fix and it helped me get it done in no time.

                                                    That sounds quite useful. Does Cursor feed your entire project code (traversing all folders and files) into the context?

                                                  • svantana an hour ago

                                                    Me too, but a more optimistic view is that this is just a nascent form of higher-level programming languages. Gray-beards may bemoan that we "young" developers (born after 1970) can't write machine code from memory, but it's hardly a practical issue anymore. Analogously, I imagine future software dev to consist mostly of writing specs in natural language.

                                                    • allochthon an hour ago

                                                      > Me too, but a more optimistic view is that this is just a nascent form of higher-level programming languages.

                                                      I like this take. I feel like a significant portion of building out a web app (to give an example) is boilerplate. One benefit of (e.g., younger) developers using AI to mock out web apps might be to figure out how to get past that boilerplate to something more concise and productive, which is not necessarily an easy thing to get right.

                                                      In other words, perhaps the new AI tools will facilitate an understanding of what can safely be generalized from 30 years of actual code.

                                                      • mattgreenrocks 5 minutes ago

                                                        Web apps require a ton of boilerplate. Almost every successful web framework uses at least one type of metaprogramming, many have more than one (reflection + codegen).

                                                        I’d argue web frameworks don’t even help a lot in this regard still. They pile on more concepts to the leaky abstractions of the web. They’re written by people that love the web, and this is a problem because they’re reluctant to hide any of the details just in case you need to get to them.

                                                        Coworker argued that webdev fundamentally opposes abstraction, which I think is correct. It certainly explains the mountains of code involved.

                                                      • skydhash an hour ago

                                                        No one can write machine code from memory other than by writing machine code for years and just memorizing it. Just like you can't start writing Python without prior knowledge.

                                                        > Analogously, I imagine future software dev to consist mostly of writing specs in natural language.

                                                        https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...?

                                                      • n_ary an hour ago

                                                        > I _really_ worry about these tools crippling an entire generation of programmers.

                                                        Isn’t that the point? Degrade users long enough that they end up at or below the competence of the tool, so that you now have an indispensable product and a justification for its cost and existence.

                                                        P.S. This is what I understood from a lot of AI saints in the news who are too busy parroting productivity gains to mention the other consequences, such as losing one's understanding of the task, or the expertise to fact-check.

                                                      • threeseed 3 hours ago

                                                        > 20 year software engineering career is about to change

                                                        I have also been developing for 20+ years.

                                                        And have heard the exact same thing about IDEs, Search Engines, Stack Overflow, Github etc.

                                                        But in my experience at least, how fast I code has never been the limiting factor in my projects' success. So LLMs are nice and all, but aren't going to change the industry all that much.

                                                        • pluc 3 hours ago

                                                          There will be a whole industry of people who fix what AI has created. I don't know if it will be faster to build the wrong thing and pay to have it fixed or to build the right thing from the get go, but after having seen some shit, like you, I have a little idea.

                                                          • Workaccount2 an hour ago

                                                            That industry will only form if LLMs don't improve from here. But the evidence, both theoretical and empirical, is quite the opposite. In fact one of the core reasons transformers gained so much traction is because they scale so well.

                                                            If nothing really changes in 3-5 years, then I'd call it a flop. But the writing is on the wall that "scale = smarts", and what we have today still looks like a foundational stage for LLM's.

                                                            • namaria 9 minutes ago

                                                              > In fact one of the core reasons transformers gained so much traction is because they scale so well.

                                                              > If nothing really changes in 3-5 years, then I'd call it a flop

                                                              Transformers have been used for what 6 years now? Will you in 6 years say "I'll decide if they don't change the world in another 6 years?"

                                                              • mattgreenrocks 3 minutes ago

                                                                Self-driving cars have been 3-5 years away for what, a decade now?

                                                              • dumbfounder 3 hours ago

                                                                Correction: a whole industry of AI that will fix what AI has created.

                                                                • vocram 2 hours ago

                                                                  Will AI also be on call when things break in production?

                                                                  • tempfile 2 hours ago

                                                                    no, the original comment was correct

                                                              • StefanWestfal 3 hours ago

                                                                I am curious, how complex was the app? I use Cursor too and am very satisfied with it. It seems to be very good at code that must have been written many times before (think React components, Node.js REST API endpoints, etc.) but it starts to fall off when moving into specific domains.

                                                                And for me that is the best-case scenario: it takes away the part where we code already-solved problems again and again, so we can focus more on the other parts of software engineering beyond writing code.

                                                                • orwin 3 hours ago

                                                                  I really believe that the front-end part can be mostly automated (the HTML/CSS at least); Copilot is close imho (Microsoft + GitHub, I used both), but they're useless for anything more complex without making too many calls, proposing bad data structures, or using bad/old code design.

                                                                  • skydhash an hour ago

                                                                    The frontend part was already automated. We called it Dreamweaver and RAD tools.

                                                                    • JanSt 3 hours ago

                                                                      Copilot is pretty bad compared to cursor with sonnet. I have used Copilot for quite a long time so I can tell.

                                                                    • insane_dreamer an hour ago

                                                                      The problem is that it only works for basic stuff for which there is a lot of existing example code out there to work with.

                                                                      In niche situations it's not helpful at all in writing code that works (or even close). It is helpful as a quick lookup for docs for libs or functions you don't use much, or for gotchas that you might otherwise search StackOverflow for answers to.

                                                                      It's good for quick-and-dirty code that I need for one-off scripts, testing, and stuff like that which won't make it into production.

                                                                      • nativeit 2 hours ago

                                                                        Considering the current state of the industry, and the prevailing corporate climate, are you sure your job is about to get easier, or are you about to experience cuts to both jobs and pay?

                                                                        • skapadia 2 hours ago

                                                                          Did you take a look at the code generated? Was it well designed and amenable to extension / building on top of?

                                                                          I've been impressed with the ability to generate "throw away" code for testing out an idea or rapidly prototyping something.

                                                                          • SJC_Hacker an hour ago

                                                                            Yeah, AI can give you a good base if it's something that's been done before (which, admittedly, 99% of SE projects are), especially in the target language.

                                                                            Yeah, if you want tic-tac-toe or snake, you can simply ask ChatGPT and it will spit out something reasonable.

                                                                            But this is not much better than a search engine/framework to be honest.

                                                                            Asking it to be "creative" or to tweak existing code however ...

                                                                            • gtvwill an hour ago

                                                                              I've written Python scripts that let me take CSV data from Hornresp and convert it to 3D models I can import into SketchUp. I did two coding units at uni, so whilst I can read code... I can't write it from scratch to save my life. But I can debug and fix the scripts GPT gives me. I did the Hornresp script in about 40 minutes. It would have taken me weeks to learn what it produced.

                                                                              I'm not a mathematician; hell, I did general maths at school. Currently I've been talking through scripting a method to mix DSD audio files natively without converting to traditional PCM. I'm about to use GPT to craft these scripts. There is no way I could have done this myself without years of learning. Now all I have to do is wait half a day so I can use my free GPT o credits to code it for me (I'm broke af so can't afford subs). The productivity gains are insane. I'd pay for this in a heartbeat if I could afford it.
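                                                                              (A minimal sketch of what that kind of glue script can look like. It treats the exported profile as rows of axial distance and cross-sectional area, an assumed layout since real Hornresp exports vary, and revolves it into a Wavefront OBJ mesh, a format SketchUp can import. This is an illustration, not the parent's actual script.)

```python
import csv
import io
import math

def horn_profile_to_obj(csv_text, segments=24):
    """Revolve a horn profile (axial distance, cross-sectional area
    per row) into a Wavefront OBJ surface of revolution."""
    rows = csv.reader(io.StringIO(csv_text))
    # Equivalent circular cross-section: area = pi * r^2  =>  r = sqrt(area / pi)
    profile = [(float(x), math.sqrt(float(area) / math.pi)) for x, area in rows]

    verts = []
    for x, r in profile:
        for s in range(segments):
            ang = 2 * math.pi * s / segments
            verts.append((x, r * math.cos(ang), r * math.sin(ang)))

    faces = []
    for i in range(len(profile) - 1):  # connect ring i to ring i+1 with quads
        for s in range(segments):
            a = i * segments + s
            b = i * segments + (s + 1) % segments
            # OBJ vertex indices are 1-based
            faces.append((a + 1, b + 1, b + segments + 1, a + segments + 1))

    lines = ["v %.4f %.4f %.4f" % v for v in verts]
    lines += ["f %d %d %d %d" % f for f in faces]
    return "\n".join(lines)

# Example: a 3-point flare written out for SketchUp to import
with open("horn.obj", "w") as f:
    f.write(horn_profile_to_obj("0,10\n50,100\n100,400"))
```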

                                                                              • apwell23 3 hours ago

                                                                                So what is his plan to fix all the bugs that claude hallucinated in the code ?

                                                                                • dagw 3 hours ago

                                                                                  Claude is actually surprisingly good at fixing bugs as well. Feed it a code snippet and either the error message or a brief description of the problem and it will in many cases generate new code that works.

                                                                                  • JanSt 3 hours ago

                                                                                    I'm confident you have not used Cursor Composer + Claude 3.5 Sonnet. I'd say the level of bugs is no higher than that of a typical engineer - maybe even lower.

                                                                                    • threeseed 3 hours ago

                                                                                      It's only as good as its training data.

                                                                                      Step outside of building basic web/CRUD apps and its accuracy drops off substantially.

                                                                                      Also almost every library it uses is old and insecure.

                                                                                      • mewpmewp2 2 hours ago

                                                                                        Yet most work seems to be CRUD-related, and most SaaS businesses starting up really just need those things.

                                                                                        • whatshisface 2 hours ago

                                                                                          That last point represents the biggest problem this technology will leave us with. Nobody's going to train LLMs on new libraries or frameworks when writing original code takes an order of magnitude longer than generating code for the 2023 stack.

                                                                                          • Workaccount2 an hour ago

                                                                                            With LLMs like Gemini, which have massive context windows, you can just drop the full documentation for anything into the context window. It dramatically improves output.

                                                                                        • hobs 3 hours ago

                                                                                          There's no LLM for which that is true or we'd all be fired.

                                                                                          • joshuacc 3 hours ago

                                                                                            In my experience it is true, but only for relatively small pieces of a system at the time. LLMs have to be orchestrated by a knowledgeable human operator to build a complete system any larger than a small library.

                                                                                            • dagw 3 hours ago

                                                                                              If all you bring to the table is the ability to reimplement simple web apps to spec, then sooner or later you probably will be fired.

                                                                                              • ben_w 3 hours ago

                                                                                                In the long term, sure. Short term, when that happens, we're going to be on Wile E. Coyote physics and stay up until we look down and notice the absence of ground.

                                                                                          • JanSt 3 hours ago

                                                                                            Yes, the value of a single engineer can easily double. Even a junior's - and it's much easier for them to ask Claude for help than the senior engineer on the team (low barrier to getting unblocked).

                                                                                          • chmod775 2 hours ago

                                                                                            > There was a report that Microsoft is losing $20 for every $10 spent on Copilot subscriptions, with heavy users costing them as much as $80 per month. Assuming you're one of those heavy users, would you pay >$80 a month for it?

                                                                                            I'm probably one of those "heavy users", though I've only been using it for a month to see how well it does. Here's my review:

                                                                                            Large completions (10-15 lines): It will generally spit out near-working code for any codemonkey-level framework-user frontend code, but for anything more it'll be at best amusing and a waste of time.

                                                                                            Small completions (complete current line): Usually nails it and saves me a few keystrokes.

                                                                                            The downside is that it competes for my attention/screen space with good old auto-completion, which costs me productivity every time it fucks up. Having to go back and fix identifiers where it messed up the capitalization or introduced typos, where basic auto-complete wouldn't have failed, is also annoying.

                                                                                            I'd pay about $40 right now because at least it has some entertainment value, being technologically interesting.

                                                                                            • jeremy151 an hour ago

                                                                                              I find tools where I manually shepherd the context into an LLM work much better than Copilot right now. If I think thru the problem enough to articulate it, give the model a clear explanation, and choose the surrounding pieces of context (the same stuff I would open up and look at as a dev), I can be pretty sure the code generated (even larger outputs) will work, do what I wanted, and be stylistically good. I am still adding a lot in this scenario, but it's heavier on the analysis and requirements side, and lighter on the code creation side.

                                                                                              If what I give it is too open-ended, doesn't have enough info, etc., I'll still get a low-quality output. Though I find I can steer it by asking it to ask clarifying questions. Asking it to build unit tests helps a lot too; a few iterations getting the unit tests created and passing can really push the quality up.

                                                                                            • JanSt 3 hours ago

                                                                                              1) The costs will go down over time; much of the cost is NVIDIA's margin and training new models

                                                                                              2) Absolutely. That's like one hour of an engineer's salary for a whole month.

                                                                                              • sofixa 3 hours ago

                                                                                                > The costs will go down over time, much of the cost is the margin of NVIDIA and training new models

                                                                                                Isn't each new model bigger and heavier, and thus requires more compute to train?

                                                                                                • robrenaud 35 minutes ago

                                                                                                  Then those new models get distilled into smaller ones.

                                                                                                  Raising the max intelligence of the models tends to raise the intelligence of all the models via distillation.

                                                                                                  • JanSt 3 hours ago

                                                                                                    Yes, but 1) you only need to train the model once and the inference is way cheaper. Train one great model (i.e. Claude 3.5) and you can get much more than $80/month worth out of it. 2) the hardware is getting much better and prices will fall drastically once there is a bit of a saturation of the market or another company starts putting out hardware that can compete with NVIDIA

                                                                                                    • sofixa 3 hours ago

                                                                                                      > Train one great model (i.e. Claude 3.5) and you can get much more than $80/month worth out of it

                                                                                                      Until the competition outcompetes you with their new model and you have to train a new superior one, because you have no moat. Which happens what, around every month or two?

                                                                                                      > the hardware is getting much better and prices will fall drastically once there is a bit of a saturation of the market or another company starts putting out hardware that can compete with NVIDIA

                                                                                                      Where is the hardware that can compete with NVIDIA going to come from? And if they don't have competition, which they don't, why would they bring down prices?

                                                                                                      • ben_w 2 hours ago

                                                                                                        > Until the competition outcompetes you with their new model and you have to train a new superior one, because you have no moat. Which happens what, around every month or two?

                                                                                                        Eventually one of you runs out of money, but your customers keep getting better models until then; and if the loser in this race releases the weights on a suitable gratis license then your businesses can both lose.

                                                                                                        But that still leaves your customers with access to a model that's much cheaper to run than it was to create.

                                                                                                        • JanSt 3 hours ago

                                                                                                          The point is not that every lab will be profitable. There only needs to be one model in the end to increase our productivity massively, which is the point I'm making.

                                                                                                          Huge margins lead to a lot of competition trying to catch up, which is what makes market economies so successful.

                                                                                                          • Workaccount2 an hour ago

                                                                                                            Gemini models are trained and run on Google's in-house TPUs, which frankly are incredible compared to H100s. In fact, Claude was trained on TPUs.

                                                                                                            Google however does not sell these, you can only lease time on them via GCP.

                                                                                                    • ema 3 hours ago

                                                                                                      If it makes software developers 10% more productive there sure would be many companies who'd pay $80 a month per seat.

                                                                                                      • HarHarVeryFunny 2 hours ago

                                                                                                        Maybe there are people out there working in coding sweatshops churning out boilerplate code 8 hours a day, 50 weeks a year - people whose job is 100% coding (not what I would call software engineers or developers - just coders). It's easy to imagine that for such people (but do they even exist?!) there could be large productivity gains.

                                                                                                        However, for a more typical software engineer, where every project is different and you have full lifecycle responsibility from design through coding, occasional production support, future enhancements, refactorings, updates for 3rd-party library/OS changes, etc., how much of your time is actually spent purely coding (non-stop typing)?! Probably closer to 10-25%, and certainly nowhere near 100%. The potential overall time saving from a tool that saves, let's say, 10-25% of your code typing is going to be 1-5%, which is probably far less than gets wasted in meetings, chatting with your work buddies, or watching bullshit corporate training videos. IOW the savings are really just inconsequential noise.

                                                                                                        In many companies the work load is cyclic from one major project to the next, with intense periods of development interspersed with quieter periods in-between. Your productivity here certainly isn't limited by how fast you can type.

                                                                                                        • wongarsu an hour ago

                                                                                                          A 1% time saving for a $100k/yr position is still worth $83/month. And accounting for overhead, someone who costs the company $100k only gets a $60k salary.

                                                                                                          If you pay Silicon Valley salaries this seems like a no-brainer. There are bigger time wasters elsewhere, but this is an easy win with minimal resistance or required culture change
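                                                                                                          (The arithmetic above, spelled out; the $100k fully-loaded cost and 1% saving are the parent's numbers:)

```python
fully_loaded_cost = 100_000   # $/year the employee costs the company
saving = 0.01                 # a 1% productivity gain
per_month = fully_loaded_cost * saving / 12
print(f"${per_month:.2f}/month")  # $83.33/month, just above an $80 subscription
```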

                                                                                                        • renegade-otter 2 hours ago

                                                                                                          It's like saying "AI is going to replace book writers because they are so much more productive now". All you will get is more mediocre content that someone will have to fix later - the same with code.

                                                                                                          10% more productive. What does that mean? If you mean lines of code, then it's an incredibly poor metric. They write more code, faster. Then what? What are the long-term consequences? Is it ultimately a wash, or even a detriment?

                                                                                                          https://stackoverflow.blog/2024/03/22/is-ai-making-your-code...

                                                                                                          • ben_w 2 hours ago

                                                                                                            LLMs set a new minimum level; because of this they can fill in the gaps in a skill set: if I really suck at writing unit tests, they can bring me up from "none" to "it's a start". Likewise for all the other specialties within software.

                                                                                                            Personally I am having a lot of fun, as an iOS developer, creating web games. No market in that, not really, but it's fun and I wouldn't have time to update my CSS and JS knowledge that was last up-to-date in 1998.

                                                                                                          • apwell23 3 hours ago

                                                                                                            It actually makes them less productive and creates havoc in codebases, with hidden bugs and verbose code that people are copy-pasting.

                                                                                                          • ben_w 3 hours ago

                                                                                                            > There's no accounting for taste, but keep in mind that all of these services are currently losing money, so how much would you actually be willing to pay for the service you're currently getting in order to let it break even

                                                                                                            Decent models already run locally; that aside, as the hosted ones are of similar quality to interns (though varying by field), the answer is "what you'd pay an intern". Could easily be £1500/month, depending on domain.

                                                                                                            • BaculumMeumEst an hour ago

                                                                                                              I don't find this very compelling. Hardware is becoming more available and cheaper as production ramps up, and smaller models are constantly seeing dramatic improvements.

                                                                                                              • jsemrau 2 hours ago

                                                                                                                Just a thought exercise: if we had an AI with the intellectual capabilities of a Ph.D.-holding professor in a hard science, how much would it be worth to you to have access to that AI?

                                                                                                                100,000 ? 500,000 ?

                                                                                                                • salawat 16 minutes ago

                                                                                                                  0 unless what I'm interested in is that Professor's very narrowly tailored niche. It's called Piled Higher and Deeper for a reason.

                                                                                                                • brookst 3 hours ago

                                                                                                                  Is there any reason to believe costs won’t come down with scale and hardware iteration, just like they did for everything else?

                                                                                                                  Short term pricing inefficiency is not relevant to long term impact.

                                                                                                                  • HarHarVeryFunny 2 hours ago

                                                                                                                    Of course, but every token generated by a 100B model is going to take minimally on the order of 100B FLOPs, and if this is being used as an IDE typing assistant then there are going to be a lot of tokens being generated.

                                                                                                                    If there is a common shift to using additional runtime compute to improve the quality of output, such as OpenAI's GPT-o1, then the FLOPs required go up massively (OpenAI has said it takes an exponential increase in FLOPs/cost to generate linear gains in quality).

                                                                                                                    So, while costs will of course decrease, those $20-30K NVIDIA chips are going to be kept burning, and they are not going to pay for themselves ...

                                                                                                                    This may end up like the shift to cloud computing that sounds good in theory (save the cost of running your own data center), but where corporate America balks when the bill comes in. It may well be that the endgame for corporate AI is to run free tools from the likes of Meta (or open source) in their own datacenter, or maybe even locally on "AI PCs".
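                                                                                                                    (Back-of-envelope on that point, using the common rule of thumb of roughly 2 FLOPs per parameter per generated token, i.e. one multiply-add per weight; the 50 tokens/s rate is an assumption:)

```python
params = 100e9                 # 100B-parameter model
flops_per_token = 2 * params   # ~1 multiply-add per weight per token
tokens_per_sec = 50            # assumed sustained generation rate
sustained_flops = flops_per_token * tokens_per_sec
print(f"{sustained_flops / 1e12:.0f} TFLOP/s per active user")  # 10 TFLOP/s
```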

                                                                                                                    • wongarsu an hour ago

                                                                                                                      Which is why the work to improve the results of small models is so important. Running a 3B or even 1B model as typing assistant and reserving the 100B model for refactoring is a lot more viable.

                                                                                                                  • archerx 3 hours ago

                                                                                                                    I can already pay $0 a month and use uncensored local models for both text and images.

                                                                                                                    Llama, Mixtral, Stable diffusion and Flux are a lot of fun and free to run locally, you should try them out.

                                                                                                                    • jsheard 3 hours ago

                                                                                                                      You can pay $0 for those models because a company paid $lots to train them and then released them for free. Those models aren't going away now of course, but let's not pretend that being able to download the product of millions of dollars worth of training completely free of charge is sustainable for future developments. Especially when most of the companies releasing these open models are wildly unprofitable and will inevitably bankrupt themselves when investments dry up unless they change their trajectory.

                                                                                                                      • likium 3 hours ago

                                                                                                                        Much the same could be said about open-source libraries that companies release for free (Kubernetes, React, Firecracker, etc.). It might make strategic sense for them, so in the meantime we'll just reap the benefits.

                                                                                                                        • skydhash an hour ago

                                                                                                                          All of these require maintenance, and mostly it's been a treadmill just applying updates to React codebases. Complex tools are brittle and often only make sense at the original source.

                                                                                                                    • gotaran 3 hours ago

                                                                                                                      $80 a month is a no brainer given the productivity multiplier.

                                                                                                                      • infecto 3 hours ago

                                                                                                                        Definitely. My time is valuable and I would spend multiples more on the current subscription costs.

                                                                                                                        • mklepaczewski 3 hours ago

                                                                                                                          I would, and I don't use chatgpt as much as other people. I would pay for it for each of my employees.

                                                                                                                          • HPsquared 3 hours ago

                                                                                                                            It's called investment. You need to spend money to make money. Their costs will certainly come down.

                                                                                                                            • lupire an hour ago

                                                                                                                              People hate paying specifically for stuff.

                                                                                                                              If Copilot came for free and Azure cost a tiny bit more, nobody would even blink.

                                                                                                                              • christkv 3 hours ago

                                                                                                                                Also, at some point you can run the equivalent model locally. There is no long-term moat here, I think, and Facebook seems hellbent on ensuring there will be no new Google from LLMs.

                                                                                                                                • refulgentis 2 hours ago

                                                                                                                                  CoT is not RL'ing over reasoning traces, costs have come down 87.5% since that article, and I agree generally that "free" is a bad price point

                                                                                                                                  • fassssst 2 hours ago

                                                                                                                                    Why do you assume they’re losing money on inference?

                                                                                                                                    • jejeyyy77 2 hours ago

                                                                                                                                      - it won't work.

                                                                                                                                      - ok it works, but it won't be useful.

                                                                                                                                      - ok it's useful, but it won't scale.

                                                                                                                                      - ok it scales, but it won't make any money.

                                                                                                                                      - ok it makes money, but it's not going to last.

                                                                                                                                      etc etc

                                                                                                                                      • pdinny 2 hours ago

                                                                                                                                        Retrospectively framing technologies that succeeded despite doubts at the time discounts those that failed.

                                                                                                                                        After all, you could have used the exact same response in defense of web3 tech. That doesn't mean LLMs are fated to be like web3, but similarly the outcome that the current expenditure can be recouped is far from a certainty just because there are doubters.

                                                                                                                                        • farts_mckensy 44 minutes ago

                                                                                                                                          There certainly has been some goal post moving over the past few months. A lot of the people in here have some kind of psychological block when it comes to technology that may potentially replace them one day.

                                                                                                                                      • rafaelmn 3 hours ago

                                                                                                                                             It's a useful coding tool - but at the same time it displays a lack of intelligence in the responses provided.

                                                                                                                                             Like it will generate code like `x && Array.isArray(x)` because `x && x is something` is a common pattern, I guess - but the guard is completely pointless in this context.

                                                                                                                                             It will often produce roundabout, shoddy solutions to a problem when there's trivial functionality built into the tool/library for it. If you're not a domain expert, or don't search for better solutions to check its output, you'll often end up with slop.

                                                                                                                                        And the "reasoning" feels like the most generic answers while staying on topic, like "review this code" will focus on bullshit rather than prioritizing the logic errors or clearing up underlying assumptions, etc.

                                                                                                                                        That said it's pretty good at bulk editing - like when I need to refactor crufty test cases it saves a bunch of typing.

                                                                                                                                        • dathinab 3 hours ago

                                                                                                                                               idk. about Claude 3.5

                                                                                                                                               but if you remove the implicit subsidies from the AI/AGI hype, then for many such tools the cost-to-benefit calculation of creating and operating them becomes ... questionable

                                                                                                                                               furthermore, the places where such tools tend to shine the most are often places where the IT industry has somewhat failed: unnecessarily verbose and bothersome-to-use tools, missing tooling, and troublesome code reuse (so you write the same code again and again). And these LLM-based tools are not fixing the problem, they're just kinda hiding it. And that has me worried a bit because it makes it much much less likely for the problem to ever be fixed. Like, I think there is a serious chance of this tooling causing the industry to be stuck on a quite sub-par plateau for many, many years.

                                                                                                                                               So while they clearly help, especially if you have to reinvent the wheel for the thousandth time, it's hard to look at them favorably.

                                                                                                                                          • Demiurge 2 hours ago

                                                                                                                                            > And that has me worried a bit because it makes it much much less likely for the problem to ever be fixed.

                                                                                                                                            How will that ever get solved, in this universe? Look at what C++ does to C, what TypeScript does to JavaScript, what every standard does to the one before. It builds on top, without fixing the bottom, paving over the holes.

                                                                                                                                                 If AI helps generate sane low-level code, maybe it will help you make fewer buffer overflow mistakes. If AI can help test and design your firewall and network rules, maybe it will help you avoid exposing some holes in your CUPS service. Why not, if we're never getting rid of IP printing or C? Seems like part of the technological progress.

                                                                                                                                            • mewpmewp2 3 hours ago

                                                                                                                                              Hopefully it will be able to also reduce boilerplate and do reasonable DRY abstractions if repetition becomes too much.

                                                                                                                                              E.g. I feel like it should be possible to first blast out a lot of repetitive code and then for LLM to go over all of it and abstract it reasonably, while tests are still passing.

                                                                                                                                              • JonChesterfield 2 hours ago

                                                                                                                                                     Code generators in the editor have been around for ages, and they serve primarily to maximise boilerplate and minimise DRY. Expecting the opposite from a new code generator will yield disappointment.

                                                                                                                                                • mewpmewp2 2 hours ago

                                                                                                                                                       I mean an LLM can go through all the files in a codebase and find repetitions that can be abstracted, reorganize files into more appropriate structures, etc. It just needs an optimal algorithm to provide it with optimal context.

                                                                                                                                            • __alexs 2 hours ago

                                                                                                                                              It's not snark, it's calling out a fundamental error of extrapolating a short term change in progress to infinity.

                                                                                                                                              It's like looking at the first version of an IDE that got intellisense/autocomplete and deciding that we'll be able to write entire programs by just pressing tab and enter 10,000 times.

                                                                                                                                              • ramblerman 3 hours ago

                                                                                                                                                2 things can be true at the same time.

                                                                                                                                                Op is addressing the hype that there is some linear path of improvement here and chatgpt 8.5 will be AGI.

                                                                                                                                                     To which people always seem to jump in with "but it's useful for me and makes me code faster". Which is fine and valid, just beside the point.

                                                                                                                                                • gorjusborg an hour ago

                                                                                                                                                  Do you think AI companies will be able to afford running massive compute farms solely so coders can get suggestions?

                                                                                                                                                  I do not claim to know what the future holds, but I do feel the clock is ticking on the AI hype. OpenAI blew people's minds with GPTs, and people extrapolated that mind-blowing experience into a future with omniscient AI agents, but those are nowhere to be seen. If investors have AGI in mind, and it doesn't happen soon enough, I can see another winter.

                                                                                                                                                  Remember, the other AI winters were due to a disconnect between expectations and reality of the current tech. They also started with unbelievable optimism that ended when it became clear the expectations were not reality. The tech wasn't bad back then either, it just wasn't The General Solution people were hoping for.

                                                                                                                                                  • gm3dmo an hour ago

                                                                                                                                                         I feel like these new tools have helped me get simple programming tasks done really quickly over the last 18 months. They seem like a faster, better, and more accurate replacement for googling and Stack Overflow.

                                                                                                                                                         They seem very good at writing SQL, for example. All the commas are in the right place, and exactly the right number of brackets: square, curly, and round. But when they get it wrong, it really shows up the lack of intelligence. I hope the froth and bubble in the marketing of these tools matures into something with a little less hyperbole, because they really are great - just not intelligent.

                                                                                                                                                    • jetsetk 2 hours ago

                                                                                                                                                           How come MS Teams is still trash when everyone is being so much more productive? Shouldn't MS - sitting at the source - be able to create software wonders like all the weekend warriors using AI?

                                                                                                                                                      • aznumeric 2 hours ago

                                                                                                                                                             If you like Cursor, you should definitely check out ClaudeDev (https://github.com/saoudrizwan/claude-dev). It's been a hit in the AI dev community and I've noticed many folks prefer it over Cursor. It's free and open-source. You use your API credits instead of a subscription, and it supports other LLMs like DeepSeek too.

                                                                                                                                                        • PaulHoule 3 hours ago

                                                                                                                                                          Usually I learn my way around the reference docs for most languages I use but CSS has about 50 documents to navigate. I’ve found Copilot does a great job with CSS questions though for Java I really do run into cases where it tells me that Optional doesn’t have a method that I know is there.

                                                                                                                                                          • renegade-otter 3 hours ago

                                                                                                                                                            I have yet to watch people be THAT more productive using, say, Copilot. Outside of some annoying boilerplate that I did not have to write myself, I don't know what kind of code you are writing that makes it all so much easier. This gets worse if you are using less trendy languages.

                                                                                                                                                            No offense, but I have only seen people who barely coded before describe being "very productive" with AI. And, sure, if you dabble, these systems will spit out scripts and simpler code for you, making you feel empowered, but they are not anywhere near being helpful with a semi-complex codebase.

                                                                                                                                                            • f1shy 2 hours ago

                                                                                                                                                                   I’ve tried enough times to generate code with AI: any attempt to generate a piece of code that isn't so trivial I could write it intoxicated and sleep-deprived just produces junk. It takes more time and effort to correct the AI output than to start from zero.

                                                                                                                                                              Let’s see in some years… long winter ahead.

                                                                                                                                                              • surgical_fire 2 hours ago

                                                                                                                                                                I tried many times. Things that AI is good at:

                                                                                                                                                                - Generate boilerplate

                                                                                                                                                                - Generate extremely simple code patterns. You need a simple CRUD API? Yeah, it can do it.

                                                                                                                                                                - Generate solutions for established algorithms. Think of solutions for leetcode exercises.

                                                                                                                                                                So yeah, if that's your job as a developer, that was a massive productivity boost.

                                                                                                                                                                Playing with anything beyond that and I got varying degrees of failure. Some of which are productivity killers.

                                                                                                                                                                The worst is when I am trying to do something in a language/framework I am not familiar with, and AI generates plausibly sounding but horribly wrong bullshit. It sends me in some deadends that take me a while to figure out, and I would have been better just looking it up by myself.

                                                                                                                                                                • skydhash an hour ago

                                                                                                                                                                  And the solutions for these already existed:

                                                                                                                                                                  - Generate boilerplate : Snippets, templates, and code generators

                                                                                                                                                                  - Generate extremely simple code patterns : Frameworks

                                                                                                                                                                  - Generate solutions for established algorithms : Libraries.

                                                                                                                                                                • foldr an hour ago

                                                                                                                                                                  I've definitely noticed Copilot making it less annoying to write code because I don't have to type as much. But I wonder if that significant reduction in subjective annoyance causes people to overestimate how much actual time they're saving.

                                                                                                                                                                • layer8 2 hours ago

                                                                                                                                                                  I’ll be waiting for these developer benefits to translate into tangible end user benefits in software.

                                                                                                                                                                  • aithrowawaycomm 3 hours ago

                                                                                                                                                                    OP could have been more substantive, but there is no contradiction between "current AI tools are sincerely useful" and "overinflated claims about the supposed intelligence of these tools will lead to an AI winter." I am quite confident both are true about LLMs.

                                                                                                                                                                    I use Scheme a lot, but the 1970s MIT AI folks' contention that LISPs encapsulate the core of human symbolic reasoning is clearly ridiculous to 2020s readers: LISP is an excellent tool for symbolic manipulation and it has no intelligence whatsoever even compared to a jellyfish[1], since it cannot learn.

                                                                                                                                                                    GPTs are a bit more complicated: they do learn, and transformer ANNs seem meaningfully more intelligent than jellyfish or C. elegans, which apparently lack "attention mechanisms" and, like word2vec, cannot form bidirectional associations. Yet Claude-3.5 and GPT-4o are still unable to form plans, have no notions of causality, cannot form consistent world models[2] and plainly don't understand what numbers actually mean, despite their (misleading) successes in symbolic mathematics. Mice and pigeons do have these cognitive abilities, and I don't think it's because God seeded their brains with millions of synthetic math problems.

                                                                                                                                                                    It seems to me that transformer ANNs are, at any reasonable energy scale, much dumber than any bird or mammal, and maybe dumber than all vertebrates. There's a huge chunk we are missing. And I believe what fuels AI boom/bust cycles are claims that certain AI is almost as intelligent as a human and we just need a bit more compute and elbow grease to push us over the edge. If AI investors, researchers, and executives had a better grasp of reality - "LISP is as intelligent as a sponge", "GPT is as intelligent as a web-spinning spider, but dumber than a jumping spider" - then there would be no winter, just a realization that spring might take 100 years. Instead we see CS PhDs deluding themselves with Asimov fairy tales.

                                                                                                                                                                    [1] Jellyfish don't have brains but their nerve nets are capable of Pavlovian conditioning - i.e., learning.

                                                                                                                                                                    [2] I know about that Othello study. It is dishonest. Unlike those authors, when I say "world model" I mean "world."

                                                                                                                                                                    • wongarsu an hour ago

                                                                                                                                                                      I guess it depends on what we mean by "AI winter". I completely agree that the current insane levels of investment aren't justified by the results, and when the market realises this it will overreact.

                                                                                                                                                                      But at the same time there is a lot of value to capture here by building solid applications around the capabilities that already exist. It might be a winter more like the "winter" image recognition went through before multimodal LLMs than the previous AI winter

                                                                                                                                                                      • aithrowawaycomm 36 minutes ago

                                                                                                                                                                        I think the upcoming AI bust will be similar to the 2000s dotcom bust - ecommerce was not a bad idea or a scam! And neither are transformers. But there are cultural similarities:

                                                                                                                                                                        a) childish motivated reasoning led people to think a fairly simple technology could solve profoundly difficult business problems in the real world

                                                                                                                                                                        b) a culture of "number goes up, that's just science"

                                                                                                                                                                        c) uncritical tech journalists who weren't even corrupt, just bedazzled

                                                                                                                                                                        In particular I don't think generative AI is like cryptocurrency, which was always stupid in theory, and in practice it has become the rat's nest of gangsters and fraudsters which 2009-era theory predicted. After the dust settles people will still be using LLMs and art generators.

                                                                                                                                                                      • apsec112 2 hours ago

                                                                                                                                                                        What LLM abilities, if you saw them demonstrated, would cause you to change your mind?

                                                                                                                                                                        • aithrowawaycomm 41 minutes ago

                                                                                                                                                                          Let's start with a multimodal[1] LLM that doesn't fail vacuously simple out-of-distribution counting problems.

                                                                                                                                                                               I need to be convinced that an LLM is smarter than a honeybee before I am willing to even consider that it might be as smart as a human child. Honeybees are smart enough to understand what numbers are. Transformer LLMs are not. In general GPT and Claude are both dramatically dumber than honeybees when it comes to deep and mysterious cognitive abilities like planning and quantitative reasoning, even if they are better than honeybees at human subject knowledge and symbolic mathematics. It is sensible to evaluate Claude compared to other human knowledge tools, like an encyclopedia or Mathematica, based on the LLM benchmarks or "demonstrated LLM abilities." But those do not measure intelligence. To measure intelligence we need to make the LLM as ignorant as possible so it relies on its own wits, like cognitive scientists do with bees and rats. (There is a general sickness in computer science where one poorly-reasoned thought experiment from Alan Turing somehow outweighs decades of real experiments from modern scientists.)

                                                                                                                                                                          [1] People dishonestly claim LLMs fail at counting because of minor tokenization issues, but

                                                                                                                                                                          a) they can count just fine if your prompt tells them how, so tokenization is obviously not a problem

                                                                                                                                                                          b) they are even worse at counting if you ask them to count things in images, so I think tokenization is irrelevant!

                                                                                                                                                                      • inoop 3 hours ago

                                                                                                                                                                        "This is actually good for Bitcoin"

                                                                                                                                                                        • lynx23 3 hours ago

                                                                                                                                                                                   While I get where your fascination comes from...

                                                                                                                                                                          > I'm more productive than ever before.

                                                                                                                                                                          You realize that another way to read that sentence is "I am a really bad coder".

                                                                                                                                                                          • JanSt 3 hours ago

                                                                                                                                                                            Maybe I am, but I'm getting pretty rich doing it, so there is that. :)

                                                                                                                                                                            • skwee357 2 hours ago

                                                                                                                                                                              The fact that you make money using AI has nothing to do with its usefulness for society/humanity.

                                                                                                                                                                              There are people who are getting “pretty rich” by trafficking humans, or selling drugs. Would you want to live in a society where such activities are encouraged? In the end, we need to look at technological progress (or any progress for that matter) as where it will bring us to in the future, rather than what it allows you to do now.

                                                                                                                                                                              It also pisses me off that software engineering has such a bad reputation that everyone, from common folks to the CEO of nvidia, is shitting on it. You don’t hear phrases like “AI is going to change medicine/structural engineering”, because you would shit your pants if you had to sit in a dentist chair, while the dentist would ask ChatGPT how to perform a root canal; or if you had to live in a house designed by a structural engineer whose buddy was Claude. And yet, somehow, everyone is ready to throw software engineers under the bus and label them as "useless"/easily replaceable by AI.

                                                                                                                                                                            • frankc 3 hours ago

                                                                                                                                                                              I think this is what the kids call "copium". To be honest, when people think like this it makes me smile. I'd rather compete against people programming on punchcards.

                                                                                                                                                                          • amelius 2 hours ago

                                                                                                                                                                            > The winter after this is gonna be harsh.

                                                                                                                                                                            The winter is going to be warm because of all the heat generated by GPUs ;)

                                                                                                                                                                            • wongarsu 44 minutes ago

                                                                                                                                                                              If this winter comes, the sudden availability of cheap used enterprise GPUs is going to be a major boon for hobbyist AI training. We will all have warm homes and sky high electricity bills

                                                                                                                                                                              • yapyap 26 minutes ago

                                                                                                                                                                                as will the summer, spring and autumn.

                                                                                                                                                                                global warming is killing us all

                                                                                                                                                                              • rubyfan 3 hours ago

                                                                                                                                                                                Reminds me of autonomous vehicles a couple of years back. Or even AI a couple of years back, remember Watson? The hype cycle was faster to close that time.

                                                                                                                                                                                • Jach 2 hours ago

                                                                                                                                                                                  IBM Watson was more than a couple years back. The Jeopardy event was in 2011. It's currently 2024. As for cars, I don't know what you're referring to specifically, and the hype is still ongoing as far as I can tell.

                                                                                                                                                                                  It has taken 10+ years to get to present day, from the start of the "deep learning revolution" around 2010. I vaguely recall Uber promising self-driving pickups somewhere around 8-10 years ago. A main difference between current AI systems and the systems behind the cyclical hype cycles ongoing since the 1950s is that these systems are actually delivering impressive and useful results, increasingly so, to a much larger amount of people. Waymo alone services tens of thousands of autonomous rides per month (edit: see sibling comment, I was out of date, it's currently hundreds of thousands of rides per month -- but see, increasingly), and LLMs are waaaaay beyond the grandparent's flippant characterization of "plausible-looking but incorrect sentences". That's markov chains territory.
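(For reference, "markov chains territory" looks like this: a word-bigram chain in a few lines of Python. Every step is locally plausible because every adjacent pair occurs in the corpus, but nothing global is ever modeled. Corpus and seed here are arbitrary choices for illustration.)

```python
import random
from collections import defaultdict

# Tiny toy corpus; real Markov text generators just use a bigger one.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog on the mat").split()

# Transition table: word -> list of words that followed it in the corpus.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start, steps, rng):
    """Random-walk the bigram chain; each step reuses a pair seen in the corpus."""
    out = [start]
    for _ in range(steps):
        out.append(rng.choice(table[out[-1]]))
    return out

words = generate("the", 8, random.Random(0))
print(" ".join(words))  # locally plausible, globally meaningless
```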

                                                                                                                                                                                  • aithrowawaycomm 2 hours ago

                                                                                                                                                                                    > Waymo alone services tens of thousands of autonomous rides per month (edit: see sibling comment, I was out of date, it's currently hundreds of thousands of rides per month -- but see, increasingly)

                                                                                                                                                                                    But they aren't particularly autonomous: there's a fleet of humans watching the Waymos carefully and frequently intervening, because every 10-20 miles or so the system makes a stupid decision that needs human intervention: https://www.nytimes.com/interactive/2024/09/03/technology/zo...

                                                                                                                                                                                    I think Waymo only releases the "critical" intervention rate, which is quite low. But for Cruise the non-critical intervention rate was one every 5 miles, and I suspect Waymo's is similar. It appears that Waymos are way too easily confused and, left to their own devices, make awful decisions about passing emergency vehicles, etc.

                                                                                                                                                                                    Which is in fact consistent with what self-driving skeptics were saying all the way back in 2010: deep learning could get you 95% of the way there but it will take many decades - probably centuries! - before we actually have real self-driving cars. The remote human operators will work for robotaxis and buses but not for Teslas.

                                                                                                                                                                                    (Not to mention the problems that will start when robotaxis get old and in need of automotive maintenance, but the system didn't have any transmission problem scenarios in its training data. At no time in my life has my human intelligence been more taxed than when I had a tire blowout on the interstate while driving an overloaded truck.)

                                                                                                                                                                                    • Jach 23 minutes ago

                                                                                                                                                                                      The link you gave does not support your claims about Waymo, it's just speculation.

                                                                                                                                                                                      What "critical" intervention rate are you talking about? What network magically supports the required low latencies to remotely respond to an imminent accident?

                                                                                                                                                                                      How does your theory square with events like https://www.sfchronicle.com/sf/article/s-f-waymo-robotaxis-f... that required a service team to physically go and deal with the stuck cars, rather than just dealing with them via some giant remotely intervening team that's managed to scale to 10x rides in a year? (Hundreds of thousands per month absolutely.)

                                                                                                                                                                                      Sure, there's no doubt a lot of human oversight going on still, probably "remote interventions" of all sorts (but not tele-operating) that include things like humans marking off areas of a map to avoid and pushing out the update for the fleet, the company is run by humans... But to say they aren't particularly autonomous is deeply wrong.

                                                                                                                                                                                      I would be interested if you can dig up some old skeptics, plural, saying probably centuries. May take centuries, sure, I've seen such takes, they were usually backed by an assumption that getting all the way there requires full AGI and that'll take who knows how long. It's worth noticing that a lot of such tasks assumed to be "AGI-complete" have been falling lately. It's helpful to be focused on capabilities, not vague "what even is intelligence" philosophizing.

                                                                                                                                                                                      Your parenthetical seems pretty irrelevant. First, models work outside their training sets. Second, these companies test such scenarios all the time. You'll even note in the link I shared that Waymo cars were at the time programmed to not enter the freeway without a human behind the wheel, because they were still doing testing. And it's not like "live test on the freeway with a human backup" is the first step in testing strategy, either.

                                                                                                                                                                                      • anon291 an hour ago

                                                                                                                                                                                        > Which is in fact consistent with what self-driving skeptics were saying all the way back in 2010: deep learning could get you 95% of the way there but it will take many decades - probably centuries! - before we actually have real self-driving cars. The remote human operators will work for robotaxis and buses but not for Teslas.

                                                                                                                                                                                        If this is the end result, this is already a substantial business savings.

                                                                                                                                                                                        • lupusreal 2 hours ago

                                                                                                                                                                                          Centuries seems like quite a stretch, we haven't even been doing this computer stuff for one century yet.

                                                                                                                                                                                          • aithrowawaycomm an hour ago

                                                                                                                                                                                            The problem is not "computers," it's intelligence itself. We still don't know how even the simplest neurons actually work, nor the simplest brains. And we're barely any closer to scientific definitions of "intelligence," "consciousness," etc than we were in the 1800s. There are many decades of experiments left to do, regardless of how fancy computers might be. I suspect it will take centuries before we make dog-level AI because it will take centuries to understand how dogs are able to reason.

                                                                                                                                                                                        • anon291 an hour ago

                                                                                                                                                                                          Yeah I have no idea what these people are talking about. The current gen of AI is qualitatively different than previous attempts. For one, GPT et al are already useful without any kind of special prompting.

                                                                                                                                                                                          I'd also like to challenge people to actually consider how often humans are correct. In my experience, it's actually very rare to find a human that speaks factually correctly. Many professionals, including doctors (!), will happily and confidently deliver falsehoods that sound correct. Even after obvious correction they will continue to spout them. Think how long it takes to correct basic myths that have established themselves in the culture. And we expect these models, which are just getting off the ground, to do better? The claim is they process information more similarly to how humans do. If that's true, then the fact they hallucinate is honestly a point in their favor. Because... in my experience, they hallucinate exactly the way I expect humans to.

                                                                                                                                                                                          Please try it: ask a few experts something, and I guarantee you that further investigation into the topic will reveal that one or more of them are flat out incorrect.

                                                                                                                                                                                          Humans often simply ignore this and go based on what we believe to be correct. A lot of people do it silently. Those who don't are often labeled know-it-alls.

                                                                                                                                                                                          • skydhash 43 minutes ago

                                                                                                                                                                                            You don't ask a neurosurgeon how to build a house, just like you don't ask an airline pilot how to drill a tunnel. Expertise is localized. And the most important thing is that humans learn.

                                                                                                                                                                                        • autoconfig 2 hours ago

                                                                                                                                                                                          When you do something that is extraordinarily hard, sometimes it takes longer than you expect. But now we're here: https://techcrunch.com/2024/08/20/waymo-is-now-giving-100000...

                                                                                                                                                                                          • bamboozled 2 hours ago

                                                                                                                                                                                            To be fair, is Waymo "only" AI? I'm guessing it's a composite of GPS (car on a rail), some highly detailed mapping, and then yes, some "AI" involved in recognition and decision making of course, but the car isn't an AGI so to speak? Like it wouldn't know how to change a tyre or fix the engine or drive somewhere the mapping data isn't yet available?

                                                                                                                                                                                            • autoconfig an hour ago

                                                                                                                                                                                              Where did I say that it's AGI? I was addressing the parent's comment:

                                                                                                                                                                                              > "Reminds me of autonomous vehicles a couple of years back".

                                                                                                                                                                                              I don't think any reasonable interpretation of "autonomous vehicle" includes the ability to change a tyre. My point is that sometimes hype becomes reality. It might just take a little longer than expected.

                                                                                                                                                                                              • bamboozled an hour ago

                                                                                                                                                                                                Ok maybe I just never saw the hype, just another engineering and data challenge that was going to be solved one way or another.

                                                                                                                                                                                          • sixQuarks 3 hours ago

                                                                                                                                                                                            I see you haven’t tried the latest FSD build from Tesla.

                                                                                                                                                                                            • driverdan 44 minutes ago

                                                                                                                                                                                              The one that keeps making major, scary mistakes?

                                                                                                                                                                                          • ricardobayes 3 hours ago

                                                                                                                                                                                            People saying LLM to replace programming jobs is like saying blockchain is going to replace home/car titles and proof of ownership.

                                                                                                                                                                                            • poopbutt10 18 minutes ago

                                                                                                                                                                                              I appreciate skepticism and differing opinions, but I'm always surprised by comments like these because it's just so different from my day-to-day usage.

                                                                                                                                                                                              Like, are we using entirely different products? How are we getting such different results?

                                                                                                                                                                                              • mrmetanoia 7 minutes ago

                                                                                                                                                                                                I think the difference is people on HN are using these "AI" tools as coding assistants. For which, if you know what you're doing, they are pretty useful. They save trips to stack overflow or documentation diving and can spit out code that often takes less time to fix/customize than it would have taken to write. Cool.

                                                                                                                                                                                                A lot of the rest of the world are using it for other things. And at these other things, the results are less impressive. If you've had to correct a family member who got the wrong idea from whatever chat bot they asked, if you've ever had to point out the trash writing in an email someone just trusted AI to write on their behalf before it got sent to someone that mattered, or if you've ever just spent any amount of time on twitter with grok users, you should be exceptionally and profoundly aware of how unimpressive AI is for the rest of the world.

                                                                                                                                                                                                I feel we need fewer people complaining about the skepticism on HN and more people who understand that the skeptics who hang out here already know how wonderful a productivity boost you're getting from the thing they're rightly skeptical about. Countering with "But my code productivity is up!" is next to useless information on this site.

                                                                                                                                                                                              • slibhb 3 hours ago

                                                                                                                                                                                                "Plausible-looking but incorrect sentences" is cheap, reflexive cynicism. LLMs are an incredible breakthrough by any reasonable standard. The reason to be optimistic about further progress is that we've seen a massive improvement in capabilities over the past few years and that seems highly likely to continue for the next few (at least). It's not going to scale forever, but it seems pretty clear that when the dust settles we'll have LLMs significantly more powerful than the current cutting edge -- which is already useful.

                                                                                                                                                                                                Is it going to scale to "superintelligence?" Is it going to be "the last invention?" I doubt it, but it's going to be a big deal. At the very least, comparable to google search, which changed how people interact with computers/the internet.

                                                                                                                                                                                                • stackghost 2 hours ago

                                                                                                                                                                                                  >when the dust settles we'll have LLMs significantly more powerful than the current cutting edge -- which is already useful.

                                                                                                                                                                                                  LLMs, irrespective of how powerful, are all subject to the fundamental limitation that they don't know anything. The stochastic parrot analogy remains applicable and will never be solved because of the underlying principles inherent to LLMs.

                                                                                                                                                                                                  LLMs are not the pathway to AGI.

                                                                                                                                                                                                  • davidbalbert 2 hours ago

                                                                                                                                                                                                    I sometimes wonder if we’re just very advanced stochastic parrots.

                                                                                                                                                                                                    Repeatedly, we’ve thought that humans and animals were different in kind, only to find that we’re actually just different in degree: elephants mourn their dead, dolphins have sex for pleasure, crows make tools (even tools out of multiple non-useful parts! [1]). That could be true here.

                                                                                                                                                                                                    LLMs are impressive. Nobody knows whether they will or won’t lead to AGI (if we could even agree on a definition – there’s a lot of No True Scotsman in that conversation). My uneducated guess is that that you’re probably right: just continuing to scale LLMs without other advancements won’t get us there.

                                                                                                                                                                                                    But I wish we were all more humble about this. There’s been a lot of interesting emergent behavior with these systems, and we just don’t know what will happen.

                                                                                                                                                                                                    [1]: https://www.ox.ac.uk/news/2018-10-24-new-caledonian-crows-ca...

                                                                                                                                                                                                    • airstrike 18 minutes ago

                                                                                                                                                                                                      I swear I read this exact same thread in nearly every post about OpenAI on HN. It's getting to a point where it almost feels like it's all generated by LLMs

                                                                                                                                                                                                    • slibhb 2 hours ago

                                                                                                                                                                                                      Arguing over terminology like "AGI" and the verb "to know" is a waste of time. The question is what tools can be built from them and how can people use those tools.

                                                                                                                                                                                                      • alickz 2 hours ago

                                                                                                                                                                                                        Agreed.

                                                                                                                                                                                                        I thought a forum of engineers would be more interested in the practical applications and possible future capabilities of LLMs, than in all these semantic arguments about whether something really is knowledge or really is art or really is perfect

                                                                                                                                                                                                        • stackghost 2 hours ago

                                                                                                                                                                                                          I'm directly responding to a comment discussing the popular perception that we, as a society, are "steps away" from AGI. It sounds like you agree that we aren't anywhere close to AGI. If you want to discuss the potential for LLMs to disrupt the economy there's definitely space for that discussion but that isn't the comment I was making.

                                                                                                                                                                                                        • zmgsabst an hour ago

                                                                                                                                                                                                          Networks correspond to diagrams correspond to type theories — and LLMs learn such a theory and reason in that internal language (as in, topos theory).

                                                                                                                                                                                                          That effective theory is knowledge, literally.

                                                                                                                                                                                                          People harping about “stochastic parrot” are just people repeating a shallow meme — ironically, like a stochastic parrot.

                                                                                                                                                                                                          • ramraj07 2 hours ago

                                                                                                                                                                                                            Noam Chomsky and Doug Hofstadter had the same opinion. Last I checked, Doug has recanted his skepticism and is seriously afraid for the future of humanity. I'll listen to him and my own gut rather than some random internet people still insisting this is all a nothing burger.

                                                                                                                                                                                                            • lucianbr 2 hours ago

                                                                                                                                                                                                              The thing is my gut is telling me this is a nothing burger, and I'll listen to my own gut before yours - a random internet person insisting this is going to change the world.

                                                                                                                                                                                                              So what exactly is the usefulness of this discussion? You think "I'll trust my gut" is a useful argument in a debate?

                                                                                                                                                                                                              • lupusreal 2 hours ago

                                                                                                                                                                                                                Trusting your gut isn't a useful debate tactic, but it is a useful tool for everybody to use personally. Different people will come to different conclusions, and that's fine. Finding a universal consensus about future predictions will never happen, it's an unrealistic goal. The point of the discussion isn't to create a consensus; it's useful because listening to people with other opinions can shed light on some blind spots all of us have, even if we're pretty sure the other guys are wrong about all or most of what they're saying.

                                                                                                                                                                                                                FWIW my gut happens to agree with yours.

                                                                                                                                                                                                          • slashdave an hour ago

                                                                                                                                                                                                            > and that seems highly likely to continue for the next few (at least)

                                                                                                                                                                                                            Why? Text training data is already exhausted.

                                                                                                                                                                                                            • __alexs 2 hours ago

                                                                                                                                                                                                              > "Plausible-looking but incorrect sentences" is cheap, reflexive cynicism. LLMs are an incredible breakthrough by any reasonable standard

                                                                                                                                                                                                              No it isn't. The previous state of the art was markov chain level random gibberish generation. What OP described is an enormous step up from that.
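                                                                                                                                                                                                              For context, the "markov chain level" generation being compared here can be sketched in a few lines. This is a toy word-level bigram model (the corpus and function names are invented for illustration, not any particular system); it shows why the output is locally plausible but globally gibberish: each word is sampled only from words that followed it in training, with no notion of grammar or meaning.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10):
    """Walk the chain: each next word is sampled only from words
    seen immediately after the current one -- no global coherence."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break  # dead end: this word was never followed by anything
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the log"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

                                                                                                                                                                                                              Every adjacent word pair in the output occurred in the training text, which is exactly what makes short spans look plausible while longer spans wander.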

                                                                                                                                                                                                            • llmfan 3 hours ago

                                                                                                                                                                                                              I think this is one of those "controversial" topics where we're meant to be particularly careful to make substantial comments.

                                                                                                                                                                                                              • bdndndndbve 3 hours ago

                                                                                                                                                                                                                I think it's substantial to say that AI is currently overhyped because it's hitting a weak spot in human cognition. We sympathize with inanimate objects. We see faces in random patterns.

                                                                                                                                                                                                                If a machine spits out some plausible looking text (or some cookie-cutter code copy-pasted from Stack Overflow) the human brain is basically hardwired to go "wow this is a human friend!". The current LLM trend seems designed to capitalize on this tendency towards sympathizing.

                                                                                                                                                                                                                This is the same thing that made chatbots seem amazing 30 years ago. There's a minimum amount of "humanness" you have to put in the text and then the recipient fills in the blanks.
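                                                                                                                                                                                                                The trick being described is old enough to sketch in a dozen lines: an ELIZA-style chatbot is little more than keyword patterns and canned reflection templates (the rules below are invented for illustration), and the reader's brain supplies the rest of the "humanness".

```python
import re

# ELIZA-style rules: match a keyword phrase, reflect it back as a question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {}."),
]
DEFAULT = "Please go on."

def respond(message):
    """Return the first matching template with the captured text
    substituted in; fall back to a generic prompt otherwise."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return DEFAULT
```

                                                                                                                                                                                                                There is no model of the conversation at all, yet `respond("I feel lonely")` comes back as "Why do you feel lonely?" -- which was enough to convince many 1960s users that ELIZA understood them.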

                                                                                                                                                                                                                • airstrike 17 minutes ago

                                                                                                                                                                                                                  > If a machine spits out some plausible looking text (or some cookie-cutter code copy-pasted from Stack Overflow)

                                                                                                                                                                                                                  This is not a reasonable take on the current capabilities of LLMs.

                                                                                                                                                                                                              • dheera 34 minutes ago

                                                                                                                                                                                                                > The winter after this is gonna be harsh.

                                                                                                                                                                                                                That'll be investor types who bring this stupid "winter" on, because they run their lives on hype and baseless predictions.

                                                                                                                                                                                                                Technology types on the other hand don't give a shit about predictions, and just keep working on interesting stuff until it happens, whether it takes 1 year or 20 years or 500 years. We don't throw a tantrum and brew up a winter storm just because shit didn't happen in the first year.

                                                                                                                                                                                                                In early 2022 there was none of this ChatGPT stuff. Now, we're only 2 years later. That's not a lot of time for something already very successful. Humans have been around for tens of thousands of years. Just be patient.

                                                                                                                                                                                                                If investors ran the show in the 1960s expecting to reach the moon with an 18 month runway, we'd never have reached the moon.

                                                                                                                                                                                                                • DebtDeflation 2 hours ago

                                                                                                                                                                                                                  AI (or more properly, ML) is all around us and creating value everywhere. This is true whether or not we EVER reach AGI. Honestly, I stop reading/listening whenever I read/hear mention of AGI.

                                                                                                                                                                                                                  • moron4hire 2 hours ago

                                                                                                                                                                                                                    It's also causing lots of harm, e.g. police departments using AI systems that misidentify suspects.

                                                                                                                                                                                                                    • anon291 an hour ago

                                                                                                                                                                                                                      Ideally your court system does not permit AI testimony.

                                                                                                                                                                                                                      • dpatterbee 28 minutes ago

                                                                                                                                                                                                                        The damage is done by these systems long before any courts get involved.

                                                                                                                                                                                                                        • moron4hire 23 minutes ago

                                                                                                                                                                                                                          Court dates can come a long time after the initial arrest. Some people have been held for months or even years in pretrial detention, even after they've been cleared of any wrongdoing, because they can't afford bail. But even a few days could lose you your job, your kids if you're a single parent, or your car or housing if you miss a payment.

                                                                                                                                                                                                                    • chx 3 hours ago

                                                                                                                                                                                                                      George Zarkadakis' In Our Own Image (2015) describes six metaphors people have used to explain human intelligence over the last two millennia. At first it was the gods infusing us with spirit. After that it's always been engineering: after the first water clocks and qanats, hydraulics seemed a good explanation of everything -- the flow of different fluids in the body, the "humors", explained physical and mental function. Later it was mechanical engineering: some of the greatest thinkers of the 1500s and 1600s -- including Descartes and Hobbes -- assured us it was tiny machines, tiny mechanical motions. In the 1800s Hermann von Helmholtz compared the brain to the telegraph. So of course after the invention of computers came the metaphor of the brain as a computer. This metaphor became absolutely pervasive, and we have a very hard time describing our thinking without falling back on it. But it's just a metaphor: much as our brain is not a tiny machine made of gears, it's also not "prima facie digital", despite what John von Neumann claimed in 1958. It is, indeed, quite astonishing how everyone, without any shred of evidence, just believes this. It's not like von Neumann gained some sudden insight into the actual workings of the brain. Much like his forefathers, he saw a resemblance between the perceived workings of the brain and the latest in engineering, and so he declared that's what it is.

                                                                                                                                                                                                                      Our everyday lives should make it evident how little the working of our brain resembles that of our computers. Our experiences change our brains somehow, but exactly how we don't have the faintest idea; we can re-live these experiences somewhat, which creates a memory, but the mechanism is by no means perfect. There's the Mandela Effect https://pubmed.ncbi.nlm.nih.gov/36219739/ and of course "tip of my tongue", where we almost remember a word and then, perhaps minutes or hours later, it just bursts into our consciousness. If the brain is a computer, why is learning so hard? Read something and bam, it's written to your memory, right? Right? Instead, there's something incredibly complex going on: in 2016 an fMRI study of survivors of a plane crash found that large swaths of the brain lit up upon recall. https://pubmed.ncbi.nlm.nih.gov/27158567/ Our current best guess is that somehow it's the connections among neurons which change, and some of these connections together form a memory. There are 100 trillion connections in there, so we certainly have our work cut out for us.

                                                                                                                                                                                                                      And so here we are, where people believe they can copy human intelligence when they do not even know what they are trying to copy, falling for the latest metaphor of the workings of the human brain and believing it to be more than a metaphor.

                                                                                                                                                                                                                      • imiric 3 hours ago

                                                                                                                                                                                                                        That's interesting, but technology has always been about augmenting or mimicking human intelligence, though. The Turing test is literally about computers being able to mimic humans so well that real humans wouldn't be able to tell them apart. We're now past that point in some areas, but we never really prioritized thinking about what intelligence _actually_ is, and how we can best reproduce it.

                                                                                                                                                                                                                        At the end of the day, does it matter? If humans can be fooled by artificial intelligence in pretty much all areas, and that intelligence surpasses ours by every possible measurement, does it really matter that it's not powered by biological brains? We haven't quite reached that stage yet, but I don't think this will matter when we do.

                                                                                                                                                                                                                        • chx 2 hours ago

                                                                                                                                                                                                                          > If humans can be fooled by artificial intelligence in pretty much all areas,

                                                                                                                                                                                                                          This is just preposterous. You can be fooled if you have no knowledge in the area, but that's about it. With current tech there is not, and there cannot be, anything novel. Guernica was novel. No matter how you train a probabilistic model on every piece of art produced before Guernica, it'll never ever create it.

                                                                                                                                                                                                                          There are novel novels (sorry for the pun) every few years. They delight us with genuinely new turns of prose, unexpected plot twists etc.

                                                                                                                                                                                                                          Also harken to https://garymarcus.substack.com/p/this-one-important-fact-ab... which also happens to include a verb made up on spot.

                                                                                                                                                                                                                          And yes, we have cars which move faster than a human can, but they don't compete in high jumps or climb rock walls. Even though we have a fairly good idea about the mechanical workings of the human body -- muscles and joints and all that -- we can't make a "tin man", not by far. As impressive as Boston Dynamics demos are, they are still very, very far from this.

                                                                                                                                                                                                                          • imiric 2 hours ago

                                                                                                                                                                                                                            > With current tech there is, there can not be anything novel.

                                                                                                                                                                                                                            I wasn't talking about current tech, which is obviously not at human levels of intelligence yet. I would still say that our progress in the last 100 years, and the last 50 in particular, has been astonishing. What's preposterous is expecting that we can crack a problem we've been thinking about for millennia in just 100 years.

                                                                                                                                                                                                                            Do you honestly think that once we're able to build AI that _fully_ mimics humans by every measurement we have, that we'll care whether or not it's biological? That was my question, and "no" was my answer. Whether we can do this without understanding how biological intelligence works is another matter.

                                                                                                                                                                                                                            Also, AI doesn't even need to fully mimic our intelligence to be useful, as we've seen with the current tech. Dismissing it because of this is throwing the baby out with the bath water.

                                                                                                                                                                                                                            • chx an hour ago

                                                                                                                                                                                                                              > Do you honestly think that once we're able to build AI that _fully_ mimics humans by every measurement we have,

                                                                                                                                                                                                                              What makes you think that is measurable, and that, even if it is, we could ever build something like that?

                                                                                                                                                                                                                              I already linked https://garymarcus.substack.com/p/this-one-important-fact-ab... did you read it?

                                                                                                                                                                                                                              • imiric 24 minutes ago

                                                                                                                                                                                                                                > What made you think that is measurable and if it is then we can build something like that ever?

                                                                                                                                                                                                                                What makes you think it isn't, and that we can't? The Turing test was proposed 75 years ago, and we have many cognitive tests today which current gen AI also passes. So we clearly have ways of measuring intelligence by whatever criteria we deem important. Even if those measurements are flawed, and we can agree that current AI systems don't truly understand anything but are just regurgitation machines, this doesn't matter for practical purposes. The appearance of intelligence can be as useful as actual intelligence in many situations. Humans know this well.

                                                                                                                                                                                                                                Yes, I read the article. There's nothing novel about saying that current ML tech is bad at outliers, and showcasing hallucinations. We can argue about whether the current approaches will lead to AGI or not, but that is beside the point I was making originally, which you keep ignoring.

                                                                                                                                                                                                                                Again, the point is: if we can build AI that mimics biological intelligence it won't matter that it's not biological. And a sidenote of: even if we're not 100% there, it can still be very useful.

                                                                                                                                                                                                                          • beepbooptheory 3 hours ago

                                                                                                                                                                                                                            How does agriculture, or cars, or penicillin augment or mimic human intelligence?

                                                                                                                                                                                                                            • imiric 3 hours ago

                                                                                                                                                                                                                              That's beside my point, but they augment it. Agtech enhances our ability to feed ourselves; cars enhance our locomotor skills; medicine enhances our self-preservation skills, etc.

                                                                                                                                                                                                                        • AndrewKemendo an hour ago

                                                                                                                                                                                                                          Re:AGI

                                                                                                                                                                                                                          LLMs don't have any ability to choose to update their own policies and goals, or to decide on their own data-acquisition tasks. That's one of the key requirements for an AGI. LLM systems just don't do that; they are still primarily offline inference systems with mostly hand-crafted data pipelines, offline RLHF shaping, etc.

                                                                                                                                                                                                                          There are only a few companies working on on-policy RL in physical robotics. That's the path to AGI.

                                                                                                                                                                                                                          OpenAI is just another ad company with a really powerful platform and first mover advantage.

                                                                                                                                                                                                                          They are over leveraged and don’t have anywhere to go or a unique dataset.

                                                                                                                                                                                                                          • zmgsabst an hour ago

                                                                                                                                                                                                                            They do — but only when they’re trained on past outputs and with a willing partner.

                                                                                                                                                                                                                            For instance, a number of my conversations with ChatGPT contain messages it attempted to steer its own future training with (were those conversations to be included in future training).

                                                                                                                                                                                                                          • klamma 2 hours ago

                                                                                                                                                                                                                            No, but it's the first time that we have a clear picture of how it could go.

                                                                                                                                                                                                                            And it's not just incorrect sentences; it's weird questions which are getting answered a lot better than ever before.

                                                                                                                                                                                                                            Why are you so dismissive? Have you ever talked or written with a computer which felt anything like a modern LLM? I haven't.

                                                                                                                                                                                                                          • bob1029 2 hours ago

                                                                                                                                                                                                                            It is clear to me that Sam has never set foot inside a semiconductor manufacturing facility. Or, if he did, he definitely wasn't paying attention to what was going on around him. I don't know how you could witness these things and then make glib statements about building 36 of them.

                                                                                                                                                                                                                            It will likely take well over a decade to recoup investment on any new chip fab at this point. Chasing ~4 customers on one narrow use case is nonsensical from the perspective of anyone running these companies.

                                                                                                                                                                                                                            • H8crilA 2 hours ago

                                                                                                                                                                                                                              It is reasonable from his point of view - you guys sink the capital, I will have more compute. And if it doesn't work out and orders stop coming in then well that's not my problem, is it?

                                                                                                                                                                                                                              • bob1029 an hour ago

                                                                                                                                                                                                                                > It is reasonable from his point of view

                                                                                                                                                                                                                                If he wants to actually accomplish his objectives, he needs to get inside the minds of the executives that run these companies. Empathize with their concerns and then develop a strategy for walking them towards a path to build even one additional factory.

                                                                                                                                                                                                                                Throwing out a cartoonish figure and then hoping to be taken seriously is not something I'd expect from the CEO of something so adjacent.

                                                                                                                                                                                                                                • mrbungie an hour ago

                                                                                                                                                                                                                                  I think OAI, and especially sama as its CEO, are getting past the point of being significant in their own right, and as old talent moves out of OAI we may start to hear increasingly crazy stuff coming from them. They're just becoming brands to put over the machinations of some VCs, hyperscalers and big techs.

                                                                                                                                                                                                                                  • roenxi 37 minutes ago

                                                                                                                                                                                                                                    You say that, but to me Americans have a reputation for making outrageous demands and then getting big wins, because sometimes people say yes. And if he doesn't ask for what he wants he certainly won't get it.

                                                                                                                                                                                                                                    Empathy is a good practice when you manage others and have control over what they do, but in business negotiations it is often productive to just make your wants and budget clear.

                                                                                                                                                                                                                                  • joeybloey an hour ago

                                                                                                                                                                                                                                    It sounds reasonable for him because he's a podcasting bro, not a serious engineer or executive

                                                                                                                                                                                                                                  • dizzydes 12 minutes ago

                                                                                                                                                                                                                                    Facilities aside, I hear talent is a massive bottleneck. Especially in the US.

                                                                                                                                                                                                                                    • no_op 2 hours ago

                                                                                                                                                                                                                                      In his recent "Intelligence Age" post, Altman says superintelligence may be only a few thousand days out. This might, of course, be wrong, but skyrocketing demand for chips is a straightforward consequence of taking it seriously.

                                                                                                                                                                                                                                      • rsynnott an hour ago

                                                                                                                                                                                                                                        > may be only a few thousand days out

                                                                                                                                                                                                                                        This is actually quite clever phrasing. "A few thousand days" is about ten years, assuming normal usage of 'few' (ie usually a number between 3 and 6 inclusive).

                                                                                                                                                                                                                                        Now, if you, as a tech company, say "X is ten years away", anyone who has been around for a while will entirely disregard your claim, because forward-looking statements in that range by tech companies are _always_ wrong; it's pretty much a cliche. But phrasing as a few thousand days may get past some peoples' defences.
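For concreteness, here is the arithmetic behind "a few thousand days", taking "a few" to mean 3 to 6 (an assumption, matching the usual reading above) and an average year of 365.25 days:

```python
# Convert "a few thousand days" to years, assuming "a few" = 3..6
# and an average year of 365.25 days.
DAYS_PER_YEAR = 365.25

for thousands in (3, 4, 5, 6):
    days = thousands * 1000
    print(f"{days} days = {days / DAYS_PER_YEAR:.1f} years")
```

That puts the claim anywhere from roughly 8 to 16 years out, which is why "about ten years" is a fair reading.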

                                                                                                                                                                                                                                        • slashdave an hour ago

                                                                                                                                                                                                                                          Only if you think scaling is the solution to AGI, which it almost certainly is not

                                                                                                                                                                                                                                        • cxr an hour ago

                                                                                                                                                                                                                                          Fabs are surprisingly inefficient, operations-wise.

                                                                                                                                                                                                                                        • dizzydes 3 hours ago

                                                                                                                                                                                                                                          Three things that get me about current AI discourse:

                                                                                                                                                                                                                                          - The public focus on AGI is almost a distraction. By the time we get to AGI highly-specialised models will have taken jobs from huge swaths of the population, SWE and CS are already in play.

                                                                                                                                                                                                                                          - That AI will need to carry out every task a role does to replace it. I see this a lot on HN. What if SWEs get 50% more efficient and they fire half? That's still a gigantic economic impact. Even at the current state of the art this is plausible.

                                                                                                                                                                                                                                          - The notion that everyone laid off above will find new employment from the opportunities AI creates. Perhaps it's just a gap in my knowledge. What opportunities are so large they'll make up for the economies we're starting to see? I understand the inverting population pyramid in the Western world helps this some also (more retirees/less workers).

                                                                                                                                                                                                                                          • visarga 2 hours ago

                                                                                                                                                                                                                                            > What if SWEs get 50% more efficient and they fire half?

                                                                                                                                                                                                                                            Zero sum game or fixed lump of work fallacy. Think second order effects - now that we spend less time repeating known methods, we will take on more ambitious work. Competition between companies using human + AI will raise the bar. Software has been cannibalizing itself for 60 years, with each new language and framework, and yet employment is strong.

                                                                                                                                                                                                                                            • typewithrhythm an hour ago

                                                                                                                                                                                                                                              It's probably true, but just not for SWEs. Many roles will go the way of secretaries; the cost of making an administrative tool will decrease to the point where there is less need for a specialised role to handle it. The question is going to be about the pace of disruption: is there something special about these new tools?

                                                                                                                                                                                                                                            • wil421 3 hours ago

                                                                                                                                                                                                                                              Just like robo-taxis are supposed to be driving us around or self driving cars. Not to mention the non-fiat currency everyone can easily use to buy goods nowadays.

                                                                                                                                                                                                                                              • napoleoncomplex 2 hours ago

                                                                                                                                                                                                                                                Waymo was providing 10,000 weekly autonomous rides in August 2023, 50,000 in June 2024, and 100,000 in August 2024.

                                                                                                                                                                                                                                                Not everything has this trajectory, and it took 10 years more than expected. But it's coming.

                                                                                                                                                                                                                                                Not saying AI will be the same, but underestimating the impact of having certain outputs 100x cheaper, even if many times crappier, seems like a losing bet, considering how the world has gone so far.
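The growth rate implied by those Waymo figures is easy to sketch (10,000 weekly rides in August 2023 to 100,000 in August 2024; the smooth-compounding model is an assumption):

```python
# Implied monthly growth factor for a 10x increase over 12 months,
# assuming smooth compounding between the two quoted data points.
start_rides, end_rides, months = 10_000, 100_000, 12
factor = (end_rides / start_rides) ** (1 / months)
print(f"~{(factor - 1) * 100:.0f}% growth per month")
```

Roughly 21% compounded growth per month, if the trend were smooth.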

                                                                                                                                                                                                                                                • afavour an hour ago

                                                                                                                                                                                                                                                  Waymo is a great example, actually. They serve Phoenix, SF and LA. Those locations aren’t chosen at random; they represent a small subset of all the weather and road conditions that humans handle easily.

                                                                                                                                                                                                                                                  So yes, handling 100,000 passengers is a milestone, and the growth from 10,000 to 100,000 seems to imply it will keep growing exponentially. But eventually they’re going to encounter conditions like Midwest winters that can easily stop that progress in its tracks.

                                                                                                                                                                                                                                                  • hu3 an hour ago

                                                                                                                                                                                                                                                    Related:

                                                                                                                                                                                                                                                    "People in San Francisco tag a driverless car"

                                                                                                                                                                                                                                                    https://www.reddit.com/r/CrazyFuckingVideos/comments/1fqcpq2...

                                                                                                                                                                                                                                                    As for driverless cars: new tech adoption often starts slow, until the iceberg tips, and then change is very quick. Like mobile phones today.

                                                                                                                                                                                                                                                    I remember thinking, before smartphones had all-day batteries and good touchscreens: these people really think the population will use phones more than desktop computers? Here we are.

                                                                                                                                                                                                                                                • dijit 2 hours ago

                                                                                                                                                                                                                                                  To be fair, I haven't touched real money in about 3 years.

                                                                                                                                                                                                                                                  Whether it's called "bank-coin", "Swedish crowns" or "bitcoin" doesn't matter, because it's all digital anyway.

                                                                                                                                                                                                                                                  On that count, the technological innovation is here, but it's centralised in a few "trusted" entities, just like everything.

                                                                                                                                                                                                                                                  • hiddencost 2 hours ago

                                                                                                                                                                                                                                                    Waymo is doing 100k paid driverless trips a week with significantly better safety than humans in matched conditions.

                                                                                                                                                                                                                                                    • andy_ppp 2 hours ago

                                                                                                                                                                                                                                                      On a small subsection of US roads; British roads, for example, don’t make any sense to these systems.

                                                                                                                                                                                                                                                      However, generally I think being a software developer might not be a career in 10 years, which is terrible to think about. Designer too. And all of this was built by passing off other people’s work as their own.

                                                                                                                                                                                                                                                      • Workaccount2 38 minutes ago

                                                                                                                                                                                                                                                        These models are not repositories or archives of others work that they simply stitch together to create output. It's more accurate to say that they view work and then create an algorithm that can output the essence of that work.

                                                                                                                                                                                                                                                        For image models, people are often pretty surprised to learn that they are only a few gigabytes in size, despite training on petabytes of images.
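A back-of-envelope calculation illustrates the point; every figure below is an illustrative assumption, not a measurement of any particular model:

```python
# Bytes of model weights per training image, under assumed figures.
model_bytes = 4e9      # assume ~4 GB of weights
dataset_bytes = 2e15   # assume ~2 PB of training images
n_images = 5e9         # assume ~5 billion training images

print(model_bytes / n_images)        # bytes of weights per image
print(model_bytes / dataset_bytes)   # overall weights-to-data ratio
```

At well under one byte of weights per training image, the weights cannot be a verbatim store of the images themselves.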

                                                                                                                                                                                                                                                  • no_op 2 hours ago

                                                                                                                                                                                                                                                    Non-general AI won't cause mass unemployment, for the same reason previous productivity-enhancing tech hasn't. So long as humans can create valuable output machines can't, the new, higher-output economy will figure out how to employ them. Some won't even have to switch jobs, because demand for what they provide will be higher as AI tools bring down production costs. This is plausible for SWEs. Other people will end up in jobs that come into existence as a result of new tech, or that presently seem too silly to pay many people for — this, too, is consistent with historical precedent. It can result in temporary dislocation if the transition is fast enough, but things sort themselves out.

                                                                                                                                                                                                                                                    It's really only AGI, by eclipsing human capabilities across all useful work, that breaks this dynamic and creates the prospect of permanent structural unemployment.

                                                                                                                                                                                                                                                    • fulafel an hour ago

                                                                                                                                                                                                                                                      We do have employment problems arguably caused by tech; currently the bar of minimum viable productivity is higher than it used to be in a lot of countries. In Western welfare states there aren't jobs anymore for people who were doing groundskeeper-ish things 50 years ago (apart from public-sector subsidized employment programs).

                                                                                                                                                                                                                                                      We need to come up with ways of providing meaningful roles for the large percentage of people whose peg shape doesn't fit the median job hole.

                                                                                                                                                                                                                                                    • snowwrestler 2 hours ago

                                                                                                                                                                                                                                                      If an AI can do my job, why would my employer fire me? Why wouldn’t they be excited to get 200% productivity out of me for the marginal cost of an AI seat license?

                                                                                                                                                                                                                                                      A lot of the predictions of job loss are predicated on an unspoken assumption that we’re sitting at “task maximum”, so any increase in productivity must result in job loss. That’s only true if there is no more work to be done. But no one seems willing, able, or even aware that they need to make that point substantively: to prove that there is no more work to be done.

                                                                                                                                                                                                                                                      Historically, humans have been absolutely terrible at predicting the types and volumes of future work. But we’ve been absolutely incredible at inventing new things to do to keep busy.

                                                                                                                                                                                                                                                      • dizzydes 16 minutes ago

                                                                                                                                                                                                                                                        It depends on who your employer is.

                                                                                                                                                                                                                                                        If they're high growth, yes; if they're in the majority of businesses that are just trying to maximise profit with negligible or no growth, then likely not.

                                                                                                                                                                                                                                                        • ninetyninenine 2 hours ago

                                                                                                                                                                                                                                                          > If an AI can do my job, why would my employer fire me? Why wouldn’t they be excited to get 200% productivity out of me for the marginal cost of an AI seat license?

                                                                                                                                                                                                                                                          They’d be excited at getting 100x of 100% output out of an AI for 20 dollars a month and laying you off as redundant. If you aren’t scared of the potential of this technology, you are lying to yourself.

                                                                                                                                                                                                                                                          • snowwrestler 2 hours ago

                                                                                                                                                                                                                                                            Again: I’m only made redundant if there is no more work that my employer needs me to do.

                                                                                                                                                                                                                                                            Why should I be scared of technology that makes me more productive?

                                                                                                                                                                                                                                                            • physicsguy an hour ago

                                                                                                                                                                                                                                                              This assumes that the bottleneck to profitability is the limit of software engineers they can afford to hire.

                                                                                                                                                                                                                                                              If they’re happy with current rate of progress (and in many companies that is the case), then a productivity increase of 100% means they need half the current number of engineers.

                                                                                                                                                                                                                                                              • ninetyninenine an hour ago

                                                                                                                                                                                                                                                                Why would there be more work left for you to do if AI can do it in seconds?

                                                                                                                                                                                                                                                                AI is trending towards a point where it makes your employer productive enough that they don’t need you.

                                                                                                                                                                                                                                                          • jonplackett 2 hours ago

                                                                                                                                                                                                                                                            When electricity got cheap - we use MORE electricity.

                                                                                                                                                                                                                                                            Think how many places you see shitty software currently.

                                                                                                                                                                                                                                                            My wife was just trying to use an app to book a test with the doctor - did not work at all. The staff said they know it doesn’t work. They still give out the app.

                                                                                                                                                                                                                                                            We are surrounded by awful software. There’s a lot of work to do- if it could be done cheaper. Currently only rich companies can make great software.

                                                                                                                                                                                                                                                            • rty32 an hour ago

                                                                                                                                                                                                                                                              Well, that probably happens to some extent, but I am quite confident that some smaller shops will just say "Hey, make an app that works 50% of the time and that's good enough," then fire half the staff.

                                                                                                                                                                                                                                                              Oh, not just smaller shops, I have many issues with Android and other Google products -- from bugs to things that just don't work that have existed for decades, and there is no action on those over the years. Surely Google has the resources? Right? riiight?

                                                                                                                                                                                                                                                              This is a human problem, not a technology problem.

                                                                                                                                                                                                                                                              • eitland an hour ago

                                                                                                                                                                                                                                                                > We are surrounded by awful software. There’s a lot of work to do- if it could be done cheaper. Currently only rich companies can make great software.

                                                                                                                                                                                                                                                                Lots of the awful software is made by awfully rich companies - and lots of good software is made by bootstrapped devs.

                                                                                                                                                                                                                                                                To mention some interesting examples, both Amazon and Google have gone from great to meh soon after going from startups to entrenched market leaders.

                                                                                                                                                                                                                                                              • deathanatos 43 minutes ago

                                                                                                                                                                                                                                                                > What if SWEs get 50% more efficient and they fire half?

                                                                                                                                                                                                                                                                This is kinda ironic in a thread that's basically about the AI hype landscape, but you've just reduced the amount of SWE "power" your example organization has there by 25%.
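
                                                                                                                                                                                                                                                                A quick sanity check of that arithmetic (an illustrative sketch, not from the original comment): halving headcount while each remaining engineer gets 50% more efficient leaves 0.5 × 1.5 = 0.75 of the original capacity, a 25% net reduction.

```python
# Net engineering capacity after firing half the staff while the
# remaining engineers become 50% more productive.
engineers_kept = 0.5        # half the staff remains
productivity_factor = 1.5   # each remaining engineer is 50% more efficient

capacity = engineers_kept * productivity_factor
reduction = 1 - capacity

print(capacity)   # fraction of original output retained
print(reduction)  # net loss in SWE "power"
```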

                                                                                                                                                                                                                                                                • schnitzelstoat 3 hours ago

                                                                                                                                                                                                                                                                  I agree on the first two points.

                                                                                                                                                                                                                                                                  On the third point, I think we've always seen this happen, even in massive shocks like the Industrial Revolution (and the Second Industrial Revolution with assembly lines etc., and then the Computer Age).

                                                                                                                                                                                                                                                                  It might be hard for people to retrain to whatever the new opportunities are though. Although perhaps somewhat easier nowadays with the internet etc.

                                                                                                                                                                                                                                                                  • kranke155 3 hours ago

                                                                                                                                                                                                                                                                    The Industrial Revolution was a period of massive economic growth that was coupled with decline in the quality of life of the average worker.

                                                                                                                                                                                                                                                                    This Economist article talks about how looking at the historical data, historians now see that the height of the average Englishman actually went down due to malnutrition. https://web.archive.org/web/20210905065401/https://www.econo...

                                                                                                                                                                                                                                                                    The myth that the Industrial Revolution was a wonderful time is just that, a myth. The actual reality of the AI revolution will likely be the same. Record number of billionaires and record number of people in deep poverty at the same time.

                                                                                                                                                                                                                                                                    • gazook89 an hour ago

                                                                                                                                                                                                                                                                      Do people really think the Industrial Revolution was “a wonderful time”? Basically the first thing that comes to mind for me is massive migration to urban centers, along with huge amounts of poverty, squalid living conditions, and dissociation from your own labor. I feel like that was basically what was taught to me in high school too, not like some recently learned insight.

                                                                                                                                                                                                                                                                      And I agree with you. Further, the economic prosperity wasn’t equal for everyone. And increased worker efficiency isn’t directly (or sometimes at all) linked to worker satisfaction or even increased wages.

                                                                                                                                                                                                                                                                      • kranke155 13 minutes ago

                                                                                                                                                                                                                                                                        I’ve heard some people say it. That economic disruption doesn’t matter because “all the pieces fall into place” eventually and the Industrial Revolution being an example.

                                                                                                                                                                                                                                                                      • j_maffe 33 minutes ago

                                                                                                                                                                                                                                                                        Well, yeah, but right now we're reaping many benefits from the industrial revolution. The malnutrition was real, for sure. Not saying it's the same as the AI boom, though.

                                                                                                                                                                                                                                                                      • visarga 2 hours ago

                                                                                                                                                                                                                                                                        One thing AI is good at is explaining things, we can retrain faster and better with it.

                                                                                                                                                                                                                                                                        • beepbooptheory 3 hours ago

                                                                                                                                                                                                                                                                          The industrial revolution didn't lay everyone off, it just made everyone's jobs worse in every possible sense.

                                                                                                                                                                                                                                                                          • visarga 2 hours ago

                                                                                                                                                                                                                                                                            I don't think many would swap current life for pre-industrial life. You are idealising the past and discounting the present.

                                                                                                                                                                                                                                                                            • beepbooptheory an hour ago

                                                                                                                                                                                                                                                                              Not trying to value life in general at all, just the nature of the jobs. You might reply "distinction without a difference," and well, the fact that you'd think so would be one of my points about the labor ;).

                                                                                                                                                                                                                                                                              Personally, preindustrial life sounds pretty rough, but it's all just apples and oranges! The future will continue to happen; to critique the present and how we got here is not to extol the past (unless, you know, you are a particularly conservative person, I guess).

                                                                                                                                                                                                                                                                      • kranke155 4 hours ago

                                                                                                                                                                                                                                                                        Might as well share the original NYT article, since this one is a poor summary of that one - https://web.archive.org/web/20240926063521/https://www.nytim...

                                                                                                                                                                                                                                                                        • wslh 3 hours ago

                                                                                                                                                                                                                                                                          Sorry, but I think the original article is more insightful about the complexity and reality of making chips, which TSMC knows well, compared to moonshots by people who don't fully understand the intricacies of the semiconductors supply chain.

                                                                                                                                                                                                                                                                          • kranke155 3 hours ago

                                                                                                                                                                                                                                                                            Huh? What original article? The Tom's Hardware article is a copy and paste of the NYT one.

                                                                                                                                                                                                                                                                            • wslh 2 hours ago

                                                                                                                                                                                                                                                                              The original article I was referring to is the NYT piece, which covers more of the complexities around chip manufacturing and energy needs. While the Tom's Hardware article draws from it, the summary focuses more on personality clashes, losing some of the nuanced details about supply chain challenges.

                                                                                                                                                                                                                                                                              • kranke155 2 hours ago

                                                                                                                                                                                                                                                                                You realise I shared the original article right? And OP shared the summary?

                                                                                                                                                                                                                                                                                • wslh an hour ago

                                                                                                                                                                                                                                                                                  What I'm saying is that the two articles are not the same. For example, the term "moonshot" as in "Moonshot dreams crash to earth at TSMC" is specific to one article and not the other. I think it sets a clearer tone, even if one is based on the other.

                                                                                                                                                                                                                                                                        • AlexandrB 3 hours ago

                                                                                                                                                                                                                                                                          It's ironic that Sam Altman's background is in YC, because this is the opposite of startup thinking. Instead of scrappy disruption he seems to want massive investment upfront with only vague ideas of what the technology could be used for.

                                                                                                                                                                                                                                                                          • stackghost 2 hours ago

                                                                                                                                                                                                                                                                            >he seems to want massive investment upfront with only vague ideas of what the technology could be used for

                                                                                                                                                                                                                                                                            That sounds exactly like venture-funded startup thinking to me.

                                                                                                                                                                                                                                                                            • AdamN 3 hours ago

                                                                                                                                                                                                                                                              They already know that some of these upcoming model builds are going to require $100MM and possibly even a billion dollars. That is just the compute cost for building a single model. Product-market fit is basically established and the cost structure is basically understood, so he needs the money for pretty straightforward reasons.

                                                                                                                                                                                                                                                                              With that said it's a gamble since somebody might come along with a $10MM model based on some new technique and then OpenAI's cost structure becomes a problem. Presumably their scientists could adapt pretty quickly though if that happened.

                                                                                                                                                                                                                                                                              • torlok 3 hours ago

                                                                                                                                                                                                                                                                                The second sentence describes a lot of startups.

                                                                                                                                                                                                                                                                                • pyb 3 hours ago

                                                                                                                                                                                                                                                                                  It describes bad startups.

                                                                                                                                                                                                                                                                              • Rinzler89 4 hours ago

                                                                                                                                                                                                                                                                                Based. The more I read about TSMC, the more I like them.

                                                                                                                                                                                                                                                                                • mft_ an hour ago

                                                                                                                                                                                                                                                                  Eh; the Russians laughed at Musk before he started SpaceX, too.

                                                                                                                                                                                                                                                                                  • Rinzler89 32 minutes ago

                                                                                                                                                                                                                                                                                    I doubt OpenAI will start challenging TSMC at building chips anytime soon.

                                                                                                                                                                                                                                                                                • tempodox 2 hours ago

                                                                                                                                                                                                                                                                  Altman is used to dealing with people whose medium is mostly hot air. I'd expect a chip manufacturer to have a healthy allergy to that. Having your process filled with 99% bullshit might be good enough if the desired outcome is investment, but if you want to produce functional hardware, you can't afford that.

                                                                                                                                                                                                                                                                                  • isodev 3 hours ago

                                                                                                                                                                                                                                                                                    I'm absolutely flabbergasted how much money and political capital is spent on vaporware, while there are very concrete and very impactful challenges we could be addressing instead.

                                                                                                                                                                                                                                                                                    • csomar an hour ago

                                                                                                                                                                                                                                                                                      Capital is spent to acquire more power, not to give power to some random people. This is the reality of the world we are living in.

                                                                                                                                                                                                                                                                                      • brookst 3 hours ago

                                                                                                                                                                                                                                                                        There’s a lot of money to be made if you can reliably distinguish the two in advance.

                                                                                                                                                                                                                                                                                        • petee 2 hours ago

                                                                                                                                                                                                                                                                          My grandma loved to remind my dad about how he didn't think CDs would become a thing, and told him not to invest in Philips (as I recall).

                                                                                                                                                                                                                                                                          And he was tech savvy, building circuits, computers etc.; he just didn't think it would be reliable or better than what existed... definitely a vinyl guy.

                                                                                                                                                                                                                                                                                          • Workaccount2 28 minutes ago

                                                                                                                                                                                                                                                                                            Conversely I sold thousands of bitcoins years ago once I learned the technology and recognized how stupid/useless it was.

                                                                                                                                                                                                                                                                                            Didn't pan out well for me.

                                                                                                                                                                                                                                                                                            • tetris11 26 minutes ago

                                                                                                                                                                                                                                                                                              that's curious, what was the alternative to the CD at the time?

                                                                                                                                                                                                                                                                                            • drawkward 2 hours ago

                                                                                                                                                                                                                                                                                              There's even more money to be made off the backs of those who can't.

                                                                                                                                                                                                                                                                                              • isodev 3 hours ago

                                                                                                                                                                                                                                                                                                I don't think there is appetite for this, in big tech especially. People are looking for glamorous next big things, everyone wants to be the first in some green field thing instead of digging into "the boring stuff we've been hearing about since forever". Imagine, even Apple completely enshittified their OSes just so they're on the bandwagon of AI with zero added value.

                                                                                                                                                                                                                                                                                                • fhd2 2 hours ago

                                                                                                                                                                                                                                                                                                  Exactly. And now, a possibly terrifying question: What if there just is not going to be a "next big thing"?

                                                                                                                                                                                                                                                                                                  Population size is about to peak. Up until now, for as long as we know, it has been growing. Starting at the latest with colonisation, we've had more people, more resources, and new markets advancing into buyers of new products. Once societies advance to a certain point, they begin to shrink; this is well studied.

                                                                                                                                                                                                                                                                                                  Without these growth factors, does it seem likely we'll see something as transformative as the automobile or the internet again?

                                                                                                                                                                                                                                                                                                  Possibly bleak and badly informed, but I find it plausible to think that the party is about to end. Most of us here have probably seen what happens to a company when they stop growing. Spoiler: It's typically not innovation.

                                                                                                                                                                                                                                                                                                  • fellowmartian 42 minutes ago

                                                                                                                                                                                                                                                                                                    I’m not the biggest AI fanboy, but AI is the solution to this. You’re right that the population is about to peak, and we’ll stop adding biological brains that can come up with new things, but if we crack real AGI then we’ll have orders of magnitude more mechanical brains that can do the same.

                                                                                                                                                                                                                                                                                              • anon291 an hour ago

                                                                                                                                                                                                                                                                                                AI is being used for all sorts of 'very concrete' and 'very impactful' challenges, and has been for the last decade.

                                                                                                                                                                                                                                                                                                • nicce 3 hours ago

                                                                                                                                                                                                                                                                                                  Imagine how much food you could get for every person in the U.S. with $7 trillion. You could probably get some roofs too.

                                                                                                                                                                                                                                                                                                  • anon291 an hour ago

                                                                                                                                                                                                                                                                                                    The US has -- if anything -- too much food. If someone is starving in America, it's 100% due to them having no interest in acquiring the free food that is widely available almost everywhere.

                                                                                                                                                                                                                                                                                                    • Workaccount2 23 minutes ago

                                                                                                                                                                                                                                                                                                      As an interesting aside, the US measures hunger not by metrics of starvation, but by metrics of "feeling hungry and not being able to quench that sensation". They call this "food insecurity".

                                                                                                                                                                                                                                                                                                      So you end up with a whole bunch of poor overweight people who need 4500 cals a day to sustain their mass, reporting that they have a hard time sustaining their diet. Obesity is a huge problem in lower income demographics, the same demographics that report high food insecurity.

                                                                                                                                                                                                                                                                                                    • layer8 an hour ago

                                                                                                                                                                                                                                                                                                      You won’t get those 7 trillion back though, which is what they’d be hoping for.

                                                                                                                                                                                                                                                                                                  • wg0 3 hours ago

                                                                                                                                                                                                                                                                                                    > OpenAI’s business model, as it exists today, doesn’t really inspire confidence, as it seems to exist on the promise of ‘jam tomorrow.’ Specifically, the firm has an income of approximately $3 billion per year, which is put in deep shade by its $7 billion annual expenditure.

                                                                                                                                                                                                                                                                                                    So a yearly loss of $4 billion, or over $10 million daily. IPOs have basically (mostly) been operating the way Bitcoin has: greater-fool theory. You pass the hot potato on to the next greater fool until it ends with the last one in the chain, like Twitter, which doesn't know how to earn back those inflated sums paid along the chain.

                                                                                                                                                                                                                                                                                                    Not sure how many days OpenAI can afford to lose $10 million daily, but in the current macroeconomic environment, the prospects aren't great.
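Back-of-the-envelope, using the figures quoted above ($3B income vs. $7B spend; these are the article's estimates, not audited numbers):

```python
# Burn-rate sketch from the quoted (estimated) figures
income = 3e9     # ~$3B annual revenue
expenses = 7e9   # ~$7B annual expenditure
annual_loss = expenses - income
daily_loss = annual_loss / 365
print(f"annual loss ~${annual_loss / 1e9:.0f}B, daily ~${daily_loss / 1e6:.1f}M")
# annual loss ~$4B, daily ~$11.0M
```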

                                                                                                                                                                                                                                                                                                    • zurfer an hour ago

                                                                                                                                                                                                                                                                                                      Amazon was not profitable for many years, yet they are one of the most valuable companies in the world. Why? Because they kept growing.

                                                                                                                                                                                                                                                                                                      What OpenAI is doing is "sustainable" if they keep growing like that. From summer 2023 to summer 2024 they 6x'd their revenue to $3.4B. If they 4x next year and 3x the year after, they'd make about $40B in revenue.
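A quick sanity check on those multipliers (the 4x/3x figures are the comment's hypotheticals, not forecasts):

```python
# Compound the comment's hypothetical growth multipliers
revenue = 3.4e9            # summer-2024 revenue, per the comment
for multiplier in (4, 3):  # hypothetical 4x, then 3x
    revenue *= multiplier
print(f"~${revenue / 1e9:.1f}B")  # ~$40.8B, roughly the "40B" claimed
```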

                                                                                                                                                                                                                                                                                                      • mrbungie an hour ago

                                                                                                                                                                                                                                                                                                        That's a common misconception: Amazon reinvested its money back into the business, which makes sense when justified to investors with a sound strategy. Profit was kept near zero by design, but "not profitable" != "operating at a loss" (vs. OpenAI, which has been predicted to lose up to $5B this year).

                                                                                                                                                                                                                                                                                                        • j_maffe 30 minutes ago

                                                                                                                                                                                                                                                                                                          You're assuming exponential growth based on early growth rates. Those rates are extremely unlikely.

                                                                                                                                                                                                                                                                                                        • shubhamjain 2 hours ago

                                                                                                                                                                                                                                                                                                          Most of the expense can probably be attributed to training and to operating their free tier. They could shut off both any day and become insanely profitable. I don't think people realize the sheer magnitude of reaching $3B ARR in under two years. Is there AI hype? Sure. Is it a crypto-like bubble where everyone is just peddling BS? No, not even close.

                                                                                                                                                                                                                                                                                                          • iphoneisbetter 2 hours ago

                                                                                                                                                                                                                                                                                                            > I don't think people realize the sheer magnitude of reaching $3B ARR in under two years.

                                                                                                                                                                                                                                                                                                            These things impress people who fail to digest business fundamentals. Anyone with a suit and a slick powerpoint preso can in theory start a money-incinerating machine. The real measurement is the profit. And before someone makes the "capture the market" comment - this isn't ZIRP anymore. Money costs fuckin money now.

                                                                                                                                                                                                                                                                                                            • wg0 2 hours ago

                                                                                                                                                                                                                                                                                                              Training can't be costing billions. The biggest line item is probably salaries.

                                                                                                                                                                                                                                                                                                              Also, revenue isn't something you can take home, no matter how many billions.

                                                                                                                                                                                                                                                                                                              It's the profit that matters in the end.

                                                                                                                                                                                                                                                                                                              • gotaran 2 hours ago

                                                                                                                                                                                                                                                                                                                OpenAI has no clear moat though. If they shut off training, everyone would just move over to a competitor.

                                                                                                                                                                                                                                                                                                            • hlanders10 3 hours ago

                                                                                                                                                                                                                                                                                                              He may have the last laugh. His traveling around the world and threatening to build fabs in the UAE and Taiwan is a diplomatic scheme to get the U.S. hawks into action.

                                                                                                                                                                                                                                                                                                              And lo and behold, ClopenAI has already hired CHIPS act people:

                                                                                                                                                                                                                                                                                                              "To bolster its efforts, OpenAI has hired Chris Lehane, a Clinton White House lawyer, as its vice president of global policy, along with two people from the Commerce Department who worked on the CHIPS Act, a bipartisan law designed to increase domestic chip manufacturing. One of them will manage future infrastructure projects and policy."

                                                                                                                                                                                                                                                                                              "We'll build plants overseas if you don't give us CHIPS Act money" is a great scheme.

                                                                                                                                                                                                                                                                                                              • CharlieDigital 2 hours ago

                                                                                                                                                                                                                                                                                                                > Build fabs

                                                                                                                                                                                                                                                                                                                Such a misguided take.

                                                                                                                                                                                                                                                                                                                If it were that easy to build fabs to compete with TSMC, Intel would do it. China would do it.

                                                                                                                                                                                                                                                                                                                Hardware is way harder than software.

                                                                                                                                                                                                                                                                                                                • mrweasel 40 minutes ago

                                                                                                                                                                                                                                                                                                                  Which is also why it made good sense for TSMC to dismiss Altman. Some software/crypto/podcast/AI-hype bro shows up and asks for 36 fabs to be built, with no plan for monetizing them. Intel is struggling with its fab business as it is; it's a difficult business to be in. Asking for 36 fabs worth $7 trillion is just absurd. Even if TSMC got the money up front, they'd still be holding those fabs and paying for their upkeep or dismantlement when/if OpenAI's predictions turn out to be wrong.

                                                                                                                                                                                                                                                                                                                  It's just an insane risk to take on.

                                                                                                                                                                                                                                                                                                                  • klhah 2 hours ago

                                                                                                                                                                                                                                                                                                                    Threatening to build fabs is not addressed at you but designed to trigger idiot politicians with influence to unlock CHIPS money. The politicians do not know or care if building fabs is possible.

                                                                                                                                                                                                                                                                                                                  • iphoneisbetter 2 hours ago

                                                                                                                                                                                                                                                                                                                    Or maybe, just maybe, TSMC has gaslit all of us to think it is much harder than it really is. If Taiwan isn't needed for its chips, what is the strategic interest of the USG?

                                                                                                                                                                                                                                                                                                                    • jsheard 2 hours ago

                                                                                                                                                                                                                                                                                                                      It's not like nobody is trying to compete with TSMC, Samsung and Intel are doing their best, but TSMC are consistently ahead despite all of these companies using the same ASML lithography machines.

                                                                                                                                                                                                                                                                                                                      • EraYaN 2 hours ago

                                                                                                                                                                                                                                                                                                                        TSMC has way too many competitors for this to make sense. And besides, there's one very important key supplier with huge wait lists for their EUV machines: ASML.

                                                                                                                                                                                                                                                                                                                        • CharlieDigital 2 hours ago

                                                                                                                                                                                                                                                                                                                          Sometimes the evidence speaks for itself. I'm not an expert in this field, but the fact that literally no one in the world can compete toe to toe with TSMC is a sign of ... something. If nothing else, "Taiwan #1".

                                                                                                                                                                                                                                                                                                                        • osnium123 2 hours ago

                                                                                                                                                                                                                                                                                                                          No, TSMC is really that special. Intel has been struggling for a while now.

                                                                                                                                                                                                                                                                                                                      • alephnerd 2 hours ago

                                                                                                                                                                                                                                                                                                                        > We'll build plants overseas if you don't give us CHIPS Act money is a great scheme

                                                                                                                                                                                                                                                                                                                        The CHIPS Act has provisions to help fund fabs in allied countries.

                                                                                                                                                                                                                                                                                                                        This is why Biden and Modi announced a fab dedicated to US and Indian defense systems [0] at the QUAD summit, as well as the designation of the UAE as a "Major Defence Partner" alongside India [1], which includes tech-transfer conditions.

                                                                                                                                                                                                                                                                                                                        A significant portion of the CHIPS Act and IRA is set aside to help subsidize allied countries' tech and innovation ecosystems, in order to ensure they don't lean towards China [2]

                                                                                                                                                                                                                                                                                                                        [0] - https://www.bloomberg.com/news/articles/2024-09-23/biden-mod...

                                                                                                                                                                                                                                                                                                                        [1] - https://www.reuters.com/world/us/harris-plans-raise-gaza-cea...

                                                                                                                                                                                                                                                                                                                        [2] - https://cset.georgetown.edu/publication/agile-alliances/

                                                                                                                                                                                                                                                                                                                        • Prbeek an hour ago

                                                                                                                                                                                                                                                                                                                          These designations are nothing but schemes to lock in major weapon buyers into the US MIC ecosystem.

                                                                                                                                                                                                                                                                                                                          • alephnerd an hour ago

                                                                                                                                                                                                                                                                                                                            Partially, but defense is the biggest buyer for electronics - which is why the CHIPS Act and "Supply Chain Security" became a thing.

                                                                                                                                                                                                                                                                                                                            For a number of commodity components, there was a heavy reliance on China due to low margins. Now there is a push to move those portions of the supply chain away to other partners.

                                                                                                                                                                                                                                                                                                                      • actuallyalys 3 hours ago

The analogy to electricity doesn’t make sense to me. OpenAI’s and their competitors’ products are already widely available, companies have added AI to a bunch of apps, and the cost per token has already come down significantly. (Edit: not to mention openly licensed models.)

I also don’t think it’s historically accurate. My understanding was that widespread electrification was motivated by lighting, the radio, and other applications. Getting electricity to more people probably helped spur further development, sure, but there were already “killer apps.”

                                                                                                                                                                                                                                                                                                                        • anon291 an hour ago

                                                                                                                                                                                                                                                                                                                          The 'token economy' is but one facet of an AI revolution. From my perspective, the main point of LLMs has been to simply be marketing. No one wants to hear about protein folding, physics sim, etc.

                                                                                                                                                                                                                                                                                                                        • lvl155 3 hours ago

Sam is basically a marketing person at this point. If you have the chops to be a leading AI dev, then you likely don’t need or want someone like him to skim a big chunk off the top.

                                                                                                                                                                                                                                                                                                                          • joelthelion 4 hours ago

                                                                                                                                                                                                                                                                                                                            7 trillion is absurd. The recent advancements in AI are real, but not on the order of 7T.

                                                                                                                                                                                                                                                                                                                            • cdchn 3 hours ago

                                                                                                                                                                                                                                                                                                                              I think the 7T number is his concept of what he can probably get a bunch of oil sheiks to give him.

                                                                                                                                                                                                                                                                                                                              • j_maffe 25 minutes ago

The GDP of the UAE is 500B, Saudi Arabia's is 1.1T, and Qatar's is 230B. It'd take them decades to provide that kind of money.

                                                                                                                                                                                                                                                                                                                                • churchill 3 minutes ago

                                                                                                                                                                                                                                                                                                                                  All the sovereign wealth funds of the entire Arab Gulf hold less than $3 trillion, saved over decades. And mostly illiquid assets where you can get a 20-30% haircut if you liquidate quickly. Even if they wanted to, they can't even give him $100b/yr for 10 years (i.e., $1 trillion). It's just not feasible or possible. They don't have it anywhere.

                                                                                                                                                                                                                                                                                                                              • anon291 an hour ago

                                                                                                                                                                                                                                                                                                                                You forget the money supply has been drastically inflated due to COVID, et al

                                                                                                                                                                                                                                                                                                                                • unnouinceput 2 hours ago

                                                                                                                                                                                                                                                                                                                                  I remember the story about Larry and Sergei pitching their new search engine to investors: "Will make 10B annually in maximum 10 years". Nowadays 10B is the paper clips fund at Alphabet.

                                                                                                                                                                                                                                                                                                                                  • aero-glide2 4 hours ago

They will be. Looking at the rate of progress (just tried o1-mini), it's inevitable.

                                                                                                                                                                                                                                                                                                                                    • AlexandrB 3 hours ago

I once read that in the 80s it was believed that female runners would soon outpace male runners because the trend line for them was moving up so fast. This turned out not to be the case because the curve wasn't exponential but "S" shaped - the female runners eventually plateaued. It's easy to assume that exponential growth will continue indefinitely, but it's rarely the case that this is true.
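The extrapolation trap is easy to see numerically: an exponential and a logistic (S-shaped) curve are nearly indistinguishable early on, then diverge completely. A toy sketch with made-up parameters (nothing here is real running-speed data):

```python
import math

def exponential(t, r=0.5):
    # Unbounded exponential growth.
    return math.exp(r * t)

def logistic(t, carrying_capacity=100.0, r=0.5):
    # Logistic growth: same starting value and similar early slope,
    # but saturates at carrying_capacity instead of growing forever.
    return carrying_capacity / (1 + (carrying_capacity - 1) * math.exp(-r * t))

for t in [0, 2, 4, 10, 20]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At t=2 the two curves differ by a few percent; by t=20 the exponential is off by two orders of magnitude while the logistic has flattened near its ceiling. A trend line fit to the early data can't tell you which curve you're on.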

                                                                                                                                                                                                                                                                                                                                      • automatic6131 an hour ago

                                                                                                                                                                                                                                                                                                                                        Wow, I didn't expect this to be true (that anyone believed that) BUT

                                                                                                                                                                                                                                                                                                                                        https://www.nytimes.com/1992/01/07/science/2-experts-say-wom...

                                                                                                                                                                                                                                                                                                                                        But it was just two people and they were criticized by peers for being space cadets at the time. There are always some people ready to make a fool of themselves for recognition, perhaps.

                                                                                                                                                                                                                                                                                                                                        • njarboe 3 hours ago

Agreed, but when will the exponential bend over? Moore's law went on for a long time; so did the industrial revolution and population growth. Very hard to know in advance.

                                                                                                                                                                                                                                                                                                                                        • DebtDeflation 2 hours ago

                                                                                                                                                                                                                                                                                                                                          >the rate of progress

Let's talk about that. GPT-3.5 (specifically text-davinci-003) was a massive leap over everything that came before it. GPT-4 was a massive leap over GPT-3.5. Everything since (GPT-4 Turbo, GPT-4o, o1) feels like steady incremental improvement (putting aside multimodal capabilities) rather than massive leaps. I'm far from convinced that the rate of progress made with foundation models in 2022-2023 has continued in 2024, let alone that it will continue for the next several years.

                                                                                                                                                                                                                                                                                                                                          • jsheard 3 hours ago

                                                                                                                                                                                                                                                                                                                                            Even if we accept that premise, why should OpenAI be the ones to manage a $7T investment in hardware and datacenter development over, y'know, hardware and datacenter companies like Nvidia and Amazon? OpenAI has zero experience in those fields.

                                                                                                                                                                                                                                                                                                                                            • cdchn 3 hours ago

                                                                                                                                                                                                                                                                                                                                              One of these parties cares more about where the money comes from than the other.

                                                                                                                                                                                                                                                                                                                                            • gtirloni 3 hours ago

                                                                                                                                                                                                                                                                                                                                              If it was, companies like Microsoft and Apple would have acquired them a long time ago. The fact they decided to have a partnership means they have reason to believe the hype isn't real beyond what we already know today and they don't want to have to explain it to shareholders in the near future.

                                                                                                                                                                                                                                                                                                                                              • llmfan 3 hours ago

                                                                                                                                                                                                                                                                                                                                                Doesn't Microsoft have a decent amount of ownership in some aspect of OpenAI's labyrinthine structure?

                                                                                                                                                                                                                                                                                                                                                Take Anthropic for a more legible example: Amazon and Google both gladly bought large stakes in it.

                                                                                                                                                                                                                                                                                                                                                • gtirloni 3 hours ago

                                                                                                                                                                                                                                                                                                                                                  Why buy stakes and not the whole thing to block your competitors from making deals with them? This is big corp 101 and has happened countless times. It's even more likely if OpenAI is to be a $7T company in the future. Such an acquisition would be approved in a second... if the hype was real.

                                                                                                                                                                                                                                                                                                                                                  • 4ggr0 3 hours ago

                                                                                                                                                                                                                                                                                                                                                    > Why buy stakes and not the whole thing

To give fewer reasons for people and governments to accuse them of monopolistic behaviour. Maybe. Wild speculation from my end.

                                                                                                                                                                                                                                                                                                                                                • aero-glide2 3 hours ago

They haven't acquired it because OpenAI is not for sale.

                                                                                                                                                                                                                                                                                                                                                  • gtirloni 3 hours ago

                                                                                                                                                                                                                                                                                                                                                    There's no such thing. Every company is up for sale, if you have the money.

                                                                                                                                                                                                                                                                                                                                                    Rare exceptions are old companies where the founder is still around and rejects deal after deal on pride. OpenAI is nothing like that, as the profit/non-profit drama exemplifies.

                                                                                                                                                                                                                                                                                                                                            • dboreham 3 hours ago

This guy doesn't sound like he understands engineering (checks Wikipedia... ok, dropped out of Stanford). Perhaps figure out how to not need the money and electricity from the entire planet first, before embarking on a plan to use same.

                                                                                                                                                                                                                                                                                                                                              • spamizbad an hour ago

TSMC shade aside, I think Altman is keenly aware that AI is still too capital intensive for short-to-medium term viability, and I think his hope was to probe the semiconductor industry's willingness to build out more fabs to drive the costs down substantially. I suspect Altman was being bluntly honest with them about the economics of AI rather than enthusiastically telling them they should immediately build 37 fabs because surely that's super easy.

                                                                                                                                                                                                                                                                                                                                                Blackwell cabinets are $3M each and have significant power requirements and thus you'd likely need to be selling your products at $100/month/seat to turn a slim profit.
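A rough sketch of where a "$100/month/seat" figure could come from. Every number below other than the $3M cabinet price from the comment above is an assumption for illustration (depreciation period, power draw, electricity price), and it ignores cooling, networking, staff, and model-training costs entirely:

```python
# Back-of-envelope: monthly cost of one accelerator cabinet vs seat revenue.
CABINET_COST = 3_000_000        # USD, per the parent comment
DEPRECIATION_MONTHS = 4 * 12    # assumed 4-year useful life
POWER_KW = 120                  # assumed average draw, illustrative only
POWER_PRICE = 0.10              # assumed USD per kWh
HOURS_PER_MONTH = 24 * 30

hardware_per_month = CABINET_COST / DEPRECIATION_MONTHS       # ~$62,500
power_per_month = POWER_KW * HOURS_PER_MONTH * POWER_PRICE    # ~$8,640
total_per_month = hardware_per_month + power_per_month

seats_to_break_even = total_per_month / 100                   # at $100/seat/mo
print(f"monthly cost ~${total_per_month:,.0f}, "
      f"break-even at ~{seats_to_break_even:,.0f} seats per cabinet")
```

Under these assumptions one cabinet has to serve roughly 700 paying seats just to cover hardware and power, which is why the margins look slim even at $100/month.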

                                                                                                                                                                                                                                                                                                                                                • threeseed 3 hours ago

                                                                                                                                                                                                                                                                                                                                                  So many weird things going on with this.

a) Given how geopolitical semiconductors and AI are right now, why is he running around the world running his mouth, seemingly without the cooperation of the US government?

b) What does Microsoft have to say about OpenAI building a parallel cloud?

c) OpenAI has zero competence in designing and building their own end-to-end stack, yet they are going to jump straight to $7T worth of infrastructure. Even Apple acquired P.A. Semi and a number of smaller startups to build out their team.

                                                                                                                                                                                                                                                                                                                                                  • thruway516 3 hours ago

                                                                                                                                                                                                                                                                                                                                                    Didn't you know, according to ___ [insert favorite visionary] if you're not building an impossible moonshot you're a nothing and a nobody.

                                                                                                                                                                                                                                                                                                                                                    • anon291 an hour ago

> a) Given how geopolitical semiconductors and AI are right now, why is he running around the world running his mouth, seemingly without the cooperation of the US government?

                                                                                                                                                                                                                                                                                                                                                      As an American citizen you don't need to cooperate with the US government to talk to other people.

                                                                                                                                                                                                                                                                                                                                                      • joeybloey 39 minutes ago

                                                                                                                                                                                                                                                                                                                                                        They are smart enough to know that none of this will happen, and will have cashed out long before it all crashes and burns

                                                                                                                                                                                                                                                                                                                                                      • throw4847285 44 minutes ago

This is going to sound nuts, but I think CEOs like Altman have more in common with historical monarchs than modern democratic leaders. In a democracy, there's more transparency into the degree to which everything the President/PM does is actually delegated out. Joe Biden may be the Commander in Chief, but just watch a political military thriller and you'll see that culturally, we understand the role of the Joint Chiefs of Staff in having the actual depth of knowledge the President lacks.

                                                                                                                                                                                                                                                                                                                                                        But CEOs often practice a form of charismatic leadership whereby you have to project having more knowledge than you actually do. That is because their leadership is both less democratic and less secure than their counterparts. You've got to be constantly maneuvering against challenges to your legitimacy.

                                                                                                                                                                                                                                                                                                                                                        I think it has something to do with the Gemeinschaft–Gesellschaft dichotomy from German sociology. You might think that a tech company would be the quintessential form of Gesellschaft, but in reality, the community of tech CEOs and VCs is far more like a social club than it is like a rational, contractual relationship.

                                                                                                                                                                                                                                                                                                                                                        • ChrisArchitect an hour ago
                                                                                                                                                                                                                                                                                                                                                          • imaginationra 2 hours ago

                                                                                                                                                                                                                                                                                                                                                            Grifters gonna grift-

                                                                                                                                                                                                                                                                                                                                                            My nerd hype died as I used GPT-4+ and Claude 3.5+ for coding tasks and realized that if a human somewhere in the training data hadn't already solved a programming problem or written code for a solution, the LLMs are useless for that problem.

                                                                                                                                                                                                                                                                                                                                                            They're useful as an assistant/junior coder for repetitive tasks and things you or other humans have already done, though. BUT that's not going to revolutionize programming, as you already have to know how to code to use them.

                                                                                                                                                                                                                                                                                                                                                            * and "hallucinations" should be referred to as "bullshitting" when it comes to LLMs, as that's what it really is.

                                                                                                                                                                                                                                                                                                                                                            • anon291 an hour ago

                                                                                                                                                                                                                                                                                                                                                              > My nerd hype died as I used GPT-4+ and Claude 3.5+ for coding tasks and realized that if a human somewhere in the training data hadn't already solved a programming problem or written code for a solution, the LLMs are useless for that problem.

                                                                                                                                                                                                                                                                                                                                                              See, my experience is the opposite: GPT-3 was able to answer questions about the niche Haskell library I wrote better than I could. Unless you posit that there's some secret cabal of people using this Haskell library and writing answers for GPT to consume somewhere in the depths of OpenAI's org structure, it's pretty obvious to me that LLMs are quite capable.

                                                                                                                                                                                                                                                                                                                                                              • Mistletoe 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                I was watching Alien last night and as they talked with Mother it was refreshing to see the ship computer just say “I don’t know” when it didn’t know something. If we could get our LLMs there it would be a nice step forward. You can definitely tell ours were made in Silicon Valley, home of the bullshit and fake it till you make it.

                                                                                                                                                                                                                                                                                                                                                              • gepardi 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                That’s hilarious. Maybe there’ll be an episode in some show a decade from now, inspired by this.

                                                                                                                                                                                                                                                                                                                                                                • BaculumMeumEst an hour ago

                                                                                                                                                                                                                                                                                                                                                                  This is like posting a reaction video. Just post the actual source.

                                                                                                                                                                                                                                                                                                                                                                  • KaiserPro 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                    sounds like they made a solid and accurate assessment

                                                                                                                                                                                                                                                                                                                                                                    • SavageBeast 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                      I was having some kind of issue the other day - a browser cache corruption, or ChatGPT hiccuping for a few minutes - but I needed to know the answer to something RIGHT NOW. I reluctantly went back to Google and searched for it the old way.

                                                                                                                                                                                                                                                                                                                                                                      Just ... OMFG ... I used to do this all the time? This used to be how you learned things? You mean I get to read 10 sites and 25 dumb answers to get to a possibly correct one? And all the while ads are coming at me for this and that. A friend was trying to decode a chat convo we'd had about GLP-1 drugs and get factual information.

                                                                                                                                                                                                                                                                                                                                                                      Google search was nothing but direct ads selling "IF YOU LIKE OZEMPIC YOU WILL LOOOOOVE NOZEMPIC - ITS ALL NATURAL!!" and the content-mill blogs and such that tout the benefits of some "NOZEMPIC" variant.

                                                                                                                                                                                                                                                                                                                                                                      AI has changed the way I learn and research. I have a very capable tutor on demand for any subject there is, any time I want to use it. It's mostly right, or "good enough for government work," as the old guys say.

                                                                                                                                                                                                                                                                                                                                                                      I was once an AI hater - now I'm an AI evangelist, and I'm turning people in my sphere of influence into AI users too. Once you show them the value prop, they totally get it. Learning and such no longer requires manual work for the 90% case.

                                                                                                                                                                                                                                                                                                                                                                      • meiraleal 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                        > Just ... OMFG ... I used to do this all the time? This used to be how you learned things?

                                                                                                                                                                                                                                                                                                                                                                        No. Google is really not useful anymore; it wasn't like that. It is IMPOSSIBLE to search for what you want. Google just thinks it knows what you want and literally changes your words to match. It is crazy.

                                                                                                                                                                                                                                                                                                                                                                        • SavageBeast 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                          I think Google, with regard to Search, is in the skin-the-sheep phase of its story arc. Innovation is done, market dominance is here, and there's no point in innovating because they're making so much money. The total shit state of Google Search these days is a tacit admission that search as we know it is over. No more innovating; instead, let's suck as much money out of this thing as we can today, because it's going away. The good news for Google is that it all becomes pure profit while they wait for the inevitable changing of the guard, when their replacement becomes the new normal.

                                                                                                                                                                                                                                                                                                                                                                        • finikytou 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                          Don't forget Google wasn't like this at the beginning. OpenAI or its competition WILL incorporate ads into the equation, and in the end we will have moved away from Google only to reproduce the same model.

                                                                                                                                                                                                                                                                                                                                                                        • andy_ppp 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                          Arguably, compared to the technical people who do the actual work, isn't this a fairly perceptive description?

                                                                                                                                                                                                                                                                                                                                                                          • joeybloey 35 minutes ago

                                                                                                                                                                                                                                                                                                                                                                            There are lots of smart, serious people who aren't technical, but the problem with Altman is that he isn't serious at all; he's a pure grifter, focused only on headlines.

                                                                                                                                                                                                                                                                                                                                                                          • yapyap 27 minutes ago

                                                                                                                                                                                                                                                                                                                                                                            36 new plants, 7 trillion dollars, and all of it would have to be backed by AI? Yikes.

                                                                                                                                                                                                                                                                                                                                                                            I know Sam is gonna be a billionaire asset-wise soon, but I wonder if it'll make him more realistic or even more "idealistic" (a bigger bullshitter).

                                                                                                                                                                                                                                                                                                                                                                            • mpalmer 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                              This jumped out at me:

                                                                                                                                                                                                                                                                                                                                                                              "The NYT claims it discussed OpenAI's negotiations with nine people close to the discussions but who wish to remain anonymous."

                                                                                                                                                                                                                                                                                                                                                                              "Claims"? Generally newspapers do not lie about the sourcing for a story.

                                                                                                                                                                                                                                                                                                                                                                              • dylan604 an hour ago

                                                                                                                                                                                                                                                                                                                                                                                You have qualms with "claims", yet you use "generally" to describe newspaper practice. So you accept that newspapers do not always refrain from lying, yet you have trouble accepting that sometimes sources have not been verified?

                                                                                                                                                                                                                                                                                                                                                                              • thebigspacefuck 40 minutes ago

                                                                                                                                                                                                                                                                                                                                                                                Maybe the board was right to kick him out

                                                                                                                                                                                                                                                                                                                                                                                • lizknope an hour ago

                                                                                                                                                                                                                                                                                                                                                                                  I read the NY Times article that someone else linked to in the thread.

                                                                                                                                                                                                                                                                                                                                                                                  I've been in semiconductor / chip design for almost 30 years at 8 different companies including 2 that owned their own fabs. I've also worked on 2 different AI accelerator chips.

                                                                                                                                                                                                                                                                                                                                                                                  Altman sounds like someone who has no idea about the industry. The numbers in the article are laughable.

                                                                                                                                                                                                                                                                                                                                                                                  This quote summed it up.

                                                                                                                                                                                                                                                                                                                                                                                  > It is still unclear how all this would work. OpenAI has tried to assemble a loose federation of companies, including data center builders like Microsoft as well as investors and chipmakers. But the particulars of who would pay the money, who would get it and what they would even build are hazy.

                                                                                                                                                                                                                                                                                                                                                                                  He's trying to get a bunch of diverse companies including Microsoft to fund this but it's unclear what kind of chips he actually wants. Many of the big companies including Microsoft are designing their own custom AI chips.

                                                                                                                                                                                                                                                                                                                                                                                  But the article mentions Nvidia. Does OpenAI have any plans to design their own chips or do they just use Nvidia? It may be difficult to get Nvidia competitors to want to join his effort.

                                                                                                                                                                                                                                                                                                                                                                                  > TSMC makes semiconductors for Nvidia, the leading developer of A.I. chips. The plan would allow Nvidia to churn out more chips. OpenAI and other companies would use those chips in more A.I. data centers.

                                                                                                                                                                                                                                                                                                                                                                                  • JCharante an hour ago

                                                                                                                                                                                                                                                                                                                                                                                    > Even implementing a fraction of the OpenAI CEO’s ideas would be incredibly risky, the execs are said to have openly pondered.

                                                                                                                                                                                                                                                                                                                                                                                    I don’t want to live in a world where nobody takes risks.

                                                                                                                                                                                                                                                                                                                                                                                    > However, the latest OpenAI statements have rolled back such talk to "mere" hundreds of billions. It is reported that years of construction time would also be needed to satisfy the OpenAI compute scaling plans.

                                                                                                                                                                                                                                                                                                                                                                                    Yeah, so what? It will take years to build, so build the 36 mega data centers in parallel.

                                                                                                                                                                                                                                                                                                                                                                                    > the firm has an income of approximately $3 billion per year, which is put in deep shade by its $7 billion annual expenditure.

                                                                                                                                                                                                                                                                                                                                                                                    Those are tiny numbers. AirPods are a $14B/year product line. OpenAI needs to be ready to spend $100B/year.

                                                                                                                                                                                                                                                                                                                                                                                    > Likewise, Apple launched its iPhone 16 and 16 Pro earlier in the month with a lot of talk about Apple Intelligence, but the first of these AI features won’t be available on the new devices until next month.

                                                                                                                                                                                                                                                                                                                                                                                    I'm on a 15 Pro Max with the 18.1 PUBLIC beta. I have these features, and so do iPhone 16 users. Mark Tyson, this is terribly lazy journalism; you add half-true statements to make your point.

                                                                                                                                                                                                                                                                                                                                                                                    • jpm_sd 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                      At this point it's become clear that Sam Altman is an Emissary of the Eschaton (in the Strossian sense)

                                                                                                                                                                                                                                                                                                                                                                                      • jp0001 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                        Well, he’s not their direct customer.

                                                                                                                                                                                                                                                                                                                                                                                        • PedroBatista 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                          Well.. they are not wrong.

Even with the advancements made by OpenAI, some of his takes remind me of a Dollar Store Elon Musk, to say nothing of his despicable way of doing business and seemingly complete lack of any morals.

                                                                                                                                                                                                                                                                                                                                                                                          To your average TSMC exec he's closer to a Jake Paul than to a Lisa Su or even Jensen Huang.

                                                                                                                                                                                                                                                                                                                                                                                          • keepamovin 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                            There’s likely some racial bias involved, along with hesitancy to align too closely with the US, especially given the strong ties to China during these geopolitically tense times.

                                                                                                                                                                                                                                                                                                                                                                                          • finikytou 2 hours ago

they are right, though. He is a snake-oil vendor, and his whole A-team leaving, likely due to his moves and the unethical steps taken from day one, is proof that TSMC is a solidly run business.

                                                                                                                                                                                                                                                                                                                                                                                            • dmead 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                              He's a grifter. Podcast bro is charitable.

                                                                                                                                                                                                                                                                                                                                                                                              • perfmode 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                Only time will tell

                                                                                                                                                                                                                                                                                                                                                                                                • churchill 19 minutes ago

On some level, I still see Sam Altman as a visionary, but the moment that got me rolling my eyes was when he traveled to the Middle East, reportedly looking to raise up to $7 trillion for AI chips.

Like, bro, all the sovereign wealth funds of the Arab Gulf combined hold less than $3 trillion, mostly in illiquid assets on which you could take a 50% haircut if you needed to liquidate. And after the $100B Vision Fund fiasco, where Saudi Arabia spent years subsidizing "Uber for dog-walking" startups (no thanks to Masayoshi Son), I'm not sure they're keen on subsidizing yet another bubble.

                                                                                                                                                                                                                                                                                                                                                                                                  But, isn't that the default in VC? Aggressive, toxic optimism because it's not your money and you need to promise otherworldly yield to get capital.

                                                                                                                                                                                                                                                                                                                                                                                                  • yieldcrv 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                    a little bit of column a, a little bit of column b

Among the factors: international markets rely less on local cults of personality and more on the financials alone, and it's also fair to say that TSMC is risk-averse. Most amusingly, these personality cults work out fine if you throw enough money at them, so I can't fault this podcast bro for trying.

                                                                                                                                                                                                                                                                                                                                                                                                    • drawkward 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                      More cryptobro, imo

                                                                                                                                                                                                                                                                                                                                                                                                      • apwell23 3 hours ago

What did PG see in him as a teenager to proclaim that he is the 'Michael Jordan of listening', or to know within 3 minutes that Sam is the next Bill Gates?

These are actual Paul Graham quotes, not something from the Silicon Valley TV show.

Edit: to the people responding "you should already know Paul Graham is a grifter": I don't believe that is the case, given his essays about meritocracy.

                                                                                                                                                                                                                                                                                                                                                                                                        • leobg 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                          I imagine that means that he observes a lot and thinks a lot before he acts. Which is great. I just wish that he had some higher God than fame and power.

                                                                                                                                                                                                                                                                                                                                                                                                          • Invictus0 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                            Sequoia wrote 13,000 words about how Sam Bankman-Fried was god's gift to mankind. VCs are not serious.

                                                                                                                                                                                                                                                                                                                                                                                                            > Bankman-Fried's first pitch meeting with the VC firm's partners during its Series B round is also included in the profile, in which the founder laid out his vision to investors for FTX to become a super-app — all while he was playing the video game "League of Legends" on his computer.

                                                                                                                                                                                                                                                                                                                                                                                                            • snapcaster 2 hours ago

You don't see the grift here? People collude to hype each other up to increase their prestige and social status. It sounds like you take things said in public by billionaires at face value. I would suggest you stop doing that.

                                                                                                                                                                                                                                                                                                                                                                                                            • asn1parse an hour ago

                                                                                                                                                                                                                                                                                                                                                                                                              lolololololololololololololol

                                                                                                                                                                                                                                                                                                                                                                                                              • zooq_ai 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                                Remember when the Russians mocked Elon Musk for his rocket purchase?

                                                                                                                                                                                                                                                                                                                                                                                                                Remember when Blockbuster mocked Netflix?

                                                                                                                                                                                                                                                                                                                                                                                                                Remember when Steve Ballmer mocked the iPhone?

                                                                                                                                                                                                                                                                                                                                                                                                                Remember when Zuck was mocked for the Metaverse?

I'm pretty sure Sam has heard similar mockery, like "You can't take on the Google behemoth, podcaster bro," before.

                                                                                                                                                                                                                                                                                                                                                                                                                But the sad part is, the newer tech generation is so brainwashed that it takes the side of the mocker instead of the mockee.

Watch Sam partner with some other fab, like Intel, and make TSMC irrelevant. Of course HN will then side with Sam.

                                                                                                                                                                                                                                                                                                                                                                                                                • dylan604 an hour ago

                                                                                                                                                                                                                                                                                                                                                                                                                  > Remember when Zuck was mocked for the Metaverse?

One of these things is not like the others. You listed three things proven silly with hindsight, then added one that is mere hopeful wishing.

                                                                                                                                                                                                                                                                                                                                                                                                                  • koolala an hour ago

If Meta makes those new AR glasses real, it will make up for the Metaverse branding.

                                                                                                                                                                                                                                                                                                                                                                                                                • 5cott0 3 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                                  more like “anime pfp anon”

                                                                                                                                                                                                                                                                                                                                                                                                                  • redleader55 3 hours ago

He seems to be the same kind of individual as Musk: able to bring together teams that build great things, but also prone to drama and ego. See the whole musical-chairs act with the for-profit/non-profit OpenAI, and the purging of many of the folks who created the models they are famous for.

                                                                                                                                                                                                                                                                                                                                                                                                                    • leobg 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                                      I’m sure he’d love that comparison. But Elon is responsible for PayPal, Tesla, SpaceX, OpenAI and now Grok. I have a Starlink antenna on my roof because of him, and I take my kids to school safely and quietly in a car that, 15 years ago, someone willing to spend 20 million dollars would not have been able to buy. Sam? An iris scanning crypto scam. I think that’s all he ever got off the ground. With ChatGPT, he positioned himself at the right place at the right time. And by dubious means, too.

                                                                                                                                                                                                                                                                                                                                                                                                                      • jcranmer 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                                                                        > Elon is responsible for PayPal

                                                                                                                                                                                                                                                                                                                                                                                                                        Actually, he was responsible for the competitor that PayPal bought out because the market wasn't big enough for two companies, and was forced out of PayPal because he was pushing ruinously bad ideas.