• hyperpape 4 minutes ago

    This is an interesting analysis, but "are the costs of AI agents also rising exponentially?" is a very bad question that this doesn't answer.

    What's rising exponentially is the price of the most ambitious thing cutting edge agents can do.

    But to answer whether the cost of AI agents is rising in general, you would take a fixed set of problems, and for each of them, ask "once it's solvable, how does the price change?"

    For that latter question, there isn't a lot of data in these charts because there aren't enough curves for models of the same family over time, but it does look like there are a number of points where newer models solve the same problems at lower prices. Look at GPT5 vs. the older GPT models--the curve for GPT5 is shifted left.

    • smusamashah 7 hours ago

      Once a model is stable and good enough, for example Sonnet 4.6 or GPT 5.4 (or something else in the future), it can be burned into hardware like a Taalas chip, cutting the cost many times over and increasing the speed. At some point we can rely on an old model while staying productive with it.

      • 535188B17C93743 an hour ago

        I always wondered why the equivalent of ASIC mining didn't apply to LLM inference... now it turns out it does, and there's a company making it fast and robust!

        • radialstub 6 hours ago

          No, burning models into hardware won't make them faster or reduce the cost. It will cost far more for performance similar to what you would get with a GPU. I'm not telling you why; you can go figure that out on your own.

          • smusamashah 6 hours ago

            But isn't this already happening at https://taalas.com/? They have a demo of Llama running at 17,000 tokens per second: https://chatjimmy.ai/

            • margalabargala an hour ago

              You mean the person saying "I won't tell you why" might not know what they're talking about?! Say it ain't so.

              • gjsman-1000 5 hours ago

                With some research, it appears that chip would cost about $300-$400 to manufacture, die only.

                For an 8B parameter model.

                Opus is estimated at 500B-2T parameters. At that scale you’re past reticle limits and need HBM and multi-die packaging, which means you’ve essentially built an inference ASIC (like Groq or Etched) rather than something categorically cheaper than GPUs. The “burned into silicon” advantage mostly evaporates at frontier scale.

                • mixermachine 2 hours ago

                  The cutting-edge, max-size models will likely stay in the GPU space for a long time. But those models are not needed for most general requests. With a fine-tuned, quantized 30B model you can serve a large portion of requests with around 32GB of RAM. Free users will likely only get these kinds of models.

                  At some point we will get these models in hardware and the cost per token will be minimal.

                  • zozbot234 2 hours ago

                    > With a fine-tuned, quantized 30B model you can serve a large portion of requests with around 32GB of RAM. Free users will likely only get these kinds of models.

                    These are exactly the kinds of models that you can easily run locally by repurposing existing hardware. Depending on how long you're willing to wait for an answer, running locally even gives you strictly better outcomes for simple Q&A queries.

                    (Long-context and agentic use cases are admittedly much harder to fit under that model, since non-AI uses for the high-end hardware you'd realistically need for those are rather more limited, and they're hit by the ongoing hardware shortage.)

                  • tomrod 4 hours ago

                    Does the cost scale linearly or superlinearly? What does the $300-$400 price point tell us in relation to parameter count?

                    No gotchas here. I genuinely don't know whether 8B parameters is in a zone of significantly decreasing marginal returns -- too far out of my knowledge area, but genuinely curious.

                    • avidiax 4 hours ago

                      Die size increases cost roughly exponentially: larger dies mean fewer chips per wafer, and yield falls off exponentially with die area.

                      I expect that this kind of burned-in model is also very difficult to verify (how do you know if some of the weights are off?), and not amenable to partial disablement to increase yield. For CPUs, you just laser-disable bad cores; you can't forgo part of a neural net.
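                      A toy Poisson yield model shows the shape of this effect; the wafer cost, usable area, and defect density below are illustrative assumptions, not foundry data:

```python
import math

WAFER_COST = 17_000.0    # assumed cost of one leading-edge 300mm wafer, USD
WAFER_AREA = 70_000.0    # assumed usable wafer area, mm^2
DEFECT_DENSITY = 0.001   # assumed defects per mm^2

def cost_per_good_die(die_area_mm2: float) -> float:
    """Poisson yield model: yield = exp(-area * defect_density).

    Cost per *good* die = wafer cost / (dies per wafer * yield),
    so it grows faster than linearly in die area."""
    dies_per_wafer = WAFER_AREA / die_area_mm2           # ignores edge loss
    die_yield = math.exp(-die_area_mm2 * DEFECT_DENSITY)
    return WAFER_COST / (dies_per_wafer * die_yield)

for area in (100, 400, 800):   # mm^2; ~800 is near the reticle limit
    print(f"{area} mm^2 die: ~${cost_per_good_die(area):.0f} per good die")
```

                      Doubling the die area more than doubles the cost per good die, which is one reason huge burned-in models lose the cost argument.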

            • zipy124 6 hours ago

              The crazy part about this is that if you compare it not to US wages but to European ones, for instance in the UK, where the median software engineering wage is somewhere around $35-40 an hour, then humans are already cheaper than the best models.

              • allan_s 6 hours ago

                That's what I tell my non-tech friends when they say "Looking at how fast AI and robotics are progressing, will a robot soon take my job as an electrician?" I reply: "My job as a software engineer will be replaced sooner than yours, because for your job the robot would be much more expensive than minimum wage, and you don't need to buy a human."

                • drzaiusx11 3 hours ago

                  I wouldn't discount the worries outside of tech, however. A cheap human laborer who leans on AI for a checklist and described actions is definitely in scope to replace the hard-won hands-on knowledge of experienced industry professionals. As a layperson in a field involving manual labor, you no longer have to watch 30 YouTube videos to learn and distill a task.

                  I rebuilt my house from the studs, did my own electrical and plumbing, etc. This took a significant amount of training and research back in the day. I worked under my father for a decade before making this attempt. My father is a journeyman electrician and carpenter. I think any able bodied human could soon forgo much of that and simply get a breakdown of actions to perform in a particular order and get similar results.

                  • darepublic 2 hours ago

                    I tried to use GPT for various handy work. While it does help, I don't think it can adequately substitute for hard-won hands-on skill. Maybe next gen, if you provide a video stream and the LLM can view the exact situation. Even then, though, I wouldn't discount the difficulty of learning dexterity when you've been a coddled white-collar worker your whole life.

                    • drzaiusx11 2 hours ago

                      I wasn't suggesting white collar workers attempt blue collar work. I'm merely saying that cheap day laborers with basic experience won't have to lean on their industry mentorship model (journeyman etc) as much and can complete jobs on their own. On the cheap.

                      Today's models are insufficient for someone with zero hands-on experience, especially when limited to text modalities. However, I don't doubt that the future models you describe are coming, if they're not already here.

                      • boston_clone an hour ago

                        Is your opinion here grounded in experience from working in that field, or is it speculation?

                • segmondy 4 hours ago

                  Humans are not cheaper than AI models. Let's go with $35 an hour.

                  24 × 365 = 8,760 hours; 8,760 × $35 = $306,600

                  Yeah, a human working non stop will run $300k.

                  Now, you said the "best" models. I personally reckon that 80-90% of most work doesn't need the best models. It needs a good model, and good models are super cheap; i.e., the tiny gemma4 or qwen3.6 models will be sufficient for most of that work.

                  AI cloud usage cost grows near-linearly, but local cost doesn't. So say someone built an under-$10k system, with perhaps dual RTX 5090s. That same system will easily run 20 parallel requests, the only marginal cost is electricity, and you can run it 24/7. The 20 humans it replaces would cost ~$6 million per year, and they come with overhead for electricity, real estate, and other things that far exceeds the electricity cost of just the AI.
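                  The comparison above, as a quick sketch; the wage, worker count, and system price come from this thread, while the power draw and electricity price are my own assumptions:

```python
HOURS_PER_YEAR = 24 * 365        # 8,760
HOURLY_WAGE = 35.0               # USD/hr, the figure used above
WORKERS = 20

human_cost = WORKERS * HOURLY_WAGE * HOURS_PER_YEAR     # 20 humans, 24/7

SYSTEM_COST = 10_000.0           # one-off dual RTX 5090 build
POWER_KW = 1.5                   # assumed sustained draw
PRICE_PER_KWH = 0.15             # assumed electricity price, USD

local_ai_cost = SYSTEM_COST + POWER_KW * PRICE_PER_KWH * HOURS_PER_YEAR

print(f"20 humans, 24/7 for a year: ${human_cost:,.0f}")
print(f"local rig, 24/7 for a year: ${local_ai_cost:,.0f}")
```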

                  The thing AI agents are lacking is agency and autonomy. As they get closer and closer to having both, the majority of humans competing on the same sorts of tasks will have no chance.

                  • ac29 2 hours ago

                    > So say someone built an under $10k system, with perhaps dual RTX 5090. That same system will be able to easily run 20 parallel requests. The only cost is electricity. You can run it 24/7. For 1 year, that's ~$6million

                    I don't see how you get anywhere close to $6M of tokens out of a pair of 5090s. The class of model they could run is fairly small and extremely cheap to run via API (my math says running Gemma4-31B for 24 hours costs less than $1 on OpenRouter). Even with 20x concurrent requests, you are orders of magnitude away from $6M/yr.
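                    For illustration, rough numbers behind that claim; the per-token price and throughput here are assumptions, not actual OpenRouter rates:

```python
PRICE_PER_MTOK = 0.05     # USD per million tokens, assumed small-model rate
TOKENS_PER_SEC = 2_000    # assumed aggregate throughput of the rig

tokens_per_day = TOKENS_PER_SEC * 86_400            # seconds per day
api_cost_per_day = tokens_per_day / 1e6 * PRICE_PER_MTOK
api_cost_per_year = api_cost_per_day * 365

print(f"~{tokens_per_day / 1e6:.0f}M tokens/day "
      f"-> ~${api_cost_per_day:.2f}/day, ~${api_cost_per_year:,.0f}/yr via API")
```

                    Even at generous throughput, the equivalent API spend is thousands of dollars a year, not millions.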

                    • segmondy an hour ago

                      I never said that. My point is that paying 20 people $35/hr around the clock is about $6 million a year. You can replace that with a $10k system running 20 parallel requests and save a lot of money.

                    • zipy124 4 hours ago

                      I'm going by the graph in the original article, not stating my own point. I mean, if their cost lines on the graph are to be believed, then the number I quoted is cheaper.

                    • GorbachevyChase 5 hours ago

                      I have a lot of AI written software, and it doesn’t cost me anywhere close to what I’ve been quoted for other software projects in the past. I’ve had a guy spend over six months, full-time, on a CRUD application for permits. He didn’t even finish. I made a working prototype in Django, which was tossed to re-implement in PHP for some reason.

                      • avidiax 4 hours ago

                        My understanding is that this is normalized to the "best human" for the tasks.

                        An AI only doing a task correctly 50% of the time may in fact be better than your N% chance of hiring a highly capable human for that task, especially for contracting a human for a 1-2 hour task.

                        But your successful use of AI is still predicated on a human who can judge output and break the work into smaller tasks that fit the skill ceiling of the AI, which is currently no more than tasks that take a skilled human 2 hours.

                    • easygenes 12 hours ago

                      While I understand why they used the METR data, a cleaner look would be against the current cost-optimal frontier of open models (e.g. GLM-5.1 and MiniMax-M2.7), which paints a very different picture. Comparing only the frontier models at the time of the METR report invariably means looking at the providers who were pushing the limits of cost at that moment.

                      GPT-5 was shown as being on the costly end, surpassed by o3 at over $100/hr. I can't directly compare to METR's metrics, but a good proxy is the cost of completing the Artificial Analysis suite. GLM-5.1 completes the suite for less than half the cost of GPT-5 and is dramatically more capable than both GPT-5 and o3.

                      So while their analysis is interesting, it points towards the frontier continuing to test the limits of acceptable pricing (as Mythos is clearly reinforcing), with the lagging 6-12 months of distillation and refinement continuing to bring the cost of comparable capabilities down to much more reasonable levels.

                      • avidphantasm 7 hours ago

                        Calculating hourly costs for these models makes me think that the decision of when to hire an SWE vs. increase use of AI may follow a pattern similar to the cloud-vs-on-premises decision. I don't cost $120/hr (incl. fringe), but my employer pays my salary all year long, whether I am working or on vacation. Whereas if they use an AI model to do the same work, they may be happy to pay $120/hr or more: they might only use the model for a small fraction of the 2,080 hours per year, so they'd still save money, and they wouldn't have a messy human to deal with.
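                        That break-even works like a cloud-vs-on-prem calculation; the agent rate below is an assumption, deliberately set higher than the human's hourly cost:

```python
EMPLOYEE_RATE = 120.0     # USD/hr fully loaded, paid for every hour
HOURS_PER_YEAR = 2080     # standard work year
AI_RATE = 200.0           # assumed agent price per active hour

# The employer pays the human whether or not work exists; the agent is
# pay-per-use, so the break-even is the utilization where costs match.
annual_employee_cost = EMPLOYEE_RATE * HOURS_PER_YEAR
breakeven_hours = annual_employee_cost / AI_RATE

print(f"employee: ${annual_employee_cost:,.0f}/yr")
print(f"agent at ${AI_RATE:.0f}/hr breaks even at {breakeven_hours:.0f} hrs/yr "
      f"({breakeven_hours / HOURS_PER_YEAR:.0%} utilization)")
```

                        Below that utilization, the pricier agent is still the cheaper option, which mirrors the pay-per-use logic of cloud compute.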

                        • bbatha 7 hours ago

                          I remain convinced that we won't keep using time as the primary basis for project cost estimates in software engineering, and this transition will happen rapidly. We're going to shift to a capex/token-spend model for project estimates, where the business will say "OK, I do want that feature for $1000 in tokens".

                          • onchainintel 6 hours ago

                            I agree with you directionally that project estimates are and will be affected by this, but I don't see a scenario in which time is completely removed from the equation with respect to projects and the estimates to execute on them. We're all constrained by time, a finite resource. It's always a factor in business.

                      • JAG_Ecalona 3 hours ago

                        The sweet spot thing is the real insight here and nobody seems to be talking about it.

                        Frontier models get hyped for their maximum task horizon, but that's also where they're 10-30x more expensive per hour than their optimal range. You're paying a massive premium for the hardest tasks and still failing half the time.

                        Honestly the practical takeaway is pretty boring: just break your work into smaller chunks. Not because the models can't handle longer tasks, but because the economics at shorter task lengths are just way better. The labs are racing to push the horizon out; the smart move for anyone actually paying the bills is to stay near the sweet spot and orchestrate from there.

                        • drzaiusx11 3 hours ago

                          Model specialization is in all likelihood the way forward, both for cost and for quality of output: smaller, cheaper models specialized in their task domains. Many of the current model vendors are already attempting to do this under the hood.

                          Generalist models have similar problems as generalist humans. The proverbial "Jack of all trades, master of none."

                          That said, I've made my career as a generalist :)

                          • margalabargala an hour ago

                            Anyone trying to decide which of 30 different specialized models best fits their task has already failed.

                            Maybe the future of the backend is specialized models, but the future of what faces the user is something that appears to be a generalist model. Maybe it does things itself, maybe it just knows how to route to the specialist models, but the UX of a generalist model will win.

                          • zozbot234 2 hours ago

                            Small chunks of work start to become viable for local agentic use too. The O(N^2) dependence on context length really makes maximum-length tasks a complete non-starter locally.

                          • thelastgallon 16 hours ago

                            > On many task lengths (including those near their plateau) they cost 10 to 100 times as much per hour. For instance, Grok 4 is at $0.40 per hour at its sweet spot, but $13 per hour at the start of its final plateau. GPT-5 is about $13 per hour for tasks that take about 45 minutes, but $120 per hour for tasks that take 2 hours. And o3 actually costs $350 per hour (more than the human price) to achieve tasks at its full 1.5 hour task horizon. This is a lot of money to pay for an agent that fails at the task you’ve just paid for 50% of the time — especially in cases where failure is much worse than not having tried at all.

                            • nopinsight 12 hours ago

                              Ord's frontier-cost argument is right as far as it goes, but the piece doesn't engage with the counter-trend: inference cost for a fixed capability level has been falling faster than Moore's law. Pushing the frontier will likely keep getting more expensive and concentrated among a few players, while the intelligence needed for more mundane tasks keeps getting cheaper.

                              That raises a question: if practical-tier inference commoditizes, how does any company justify the ever-larger capex to push the frontier?

                              OpenAI's pitch is that their business model should "scale with the value intelligence delivers." Concretely, that means moving beyond API fees into licensing and outcome-based pricing in high-value R&D sectors like drug discovery and materials science, where a single breakthrough dwarfs compute cost. That's one possible answer, though it's unclear whether the mechanism will work in practice.

                              • popcorncowboy 4 hours ago

                                > how does any company justify the ever-larger capex to push the frontier

                                AGI. [waves hands at the infinite money machine]

                              • zozbot234 12 hours ago

                                This effect is likely even larger when you consider that the raw cost per inferred token grows linearly with context rather than being constant, so longer tasks performed with higher-context models cost quadratically more. Computational cost also grows super-linearly with active parameter count: a 20B-active model costs more than four times as much as a 5B-active model.
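                                The quadratic growth falls out of summing a per-token cost proportional to the context each token attends over; units here are arbitrary:

```python
def task_cost(total_tokens: int) -> float:
    """Token n attends over n tokens of context, so total work is
    1 + 2 + ... + N ~ N^2 / 2."""
    return sum(float(n) for n in range(1, total_tokens + 1))

short_task = task_cost(1_000)
long_task = task_cost(2_000)
print(f"2x the tokens -> {long_task / short_task:.1f}x the attention cost")
```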

                                • tibbar 12 hours ago

                                  Doesn't context caching mostly eliminate this problem? (I suppose with enough context, even the 90% discount eventually adds up to a lot.)

                                  • zozbot234 12 hours ago

                                    Context caching is really just storing the KV cache for reuse. It saves re-running prefill for that part of the context, but tokens generated against that KV cache still cost more as the context grows.
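                                    A sketch of that effect with made-up prices; the 90% discount mirrors common cached-input pricing, but every number here is an assumption:

```python
INPUT_PRICE = 3.0      # USD per Mtok of uncached input (assumed)
CACHED_PRICE = 0.3     # USD per Mtok of cache-hit input (90% off, assumed)
OUTPUT_PRICE = 15.0    # USD per Mtok of output (assumed)

def turn_cost(context_mtok: float, cached_fraction: float,
              output_mtok: float = 0.001) -> float:
    """One agent turn: every context token is still billed,
    just at the discounted rate when it hits the cache."""
    cached = context_mtok * cached_fraction
    fresh = context_mtok - cached
    return cached * CACHED_PRICE + fresh * INPUT_PRICE + output_mtok * OUTPUT_PRICE

small_ctx = turn_cost(0.01, cached_fraction=0.9)   # 10k tokens of context
big_ctx = turn_cost(0.20, cached_fraction=0.9)     # 200k tokens of context
print(f"10k ctx: ${small_ctx:.4f}/turn, 200k ctx: ${big_ctx:.4f}/turn")
```

                                    The cache removes the prefill compute, but the per-turn bill still climbs with context length.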

                                • naveen99 5 hours ago

                                  Where are you getting hourly costs for private models? The rate limits are pretty arbitrary. If you maxed out your API token quota it would be more like $10k/hour.

                                  • boxedemp 13 hours ago

                                    If you gave me an agent that succeeded 50% of tasks I gave it, I could take over the world in a week. Faster if I wasn't so lazy.

                                    I think you're overestimating, or oversimplifying. Maybe both.

                                    • jurgenburgen 10 hours ago

                                      > If you gave me an agent that succeeded 50% of tasks I gave it, I could take over the world in a week. Faster if I wasn't so lazy.

                                      Assuming you used o3, that would cost $58,800 per week. That's an expensive bet for only 50% odds in your favor.

                                      Of course, the agents are only that good on benchmarks; in reality your odds are worse. Maybe roulette instead?

                                      • raincole 12 hours ago

                                        No one is claiming an agent can do 50% of arbitrary tasks. It's just 50% of METR's benchmark set.

                                        > I think you're overestimating, or oversimplifying

                                        Yeah, if you only read the comments on HN but not the actual linked article, you will get an oversimplified conclusion. Like, duh?

                                        • TeMPOraL 9 hours ago

                                          > Yeah, if you only read the comments on HN but not the actual linked article, you will get an oversimplified conclusion. Like, duh?

                                          Curiously, for most submissions it's the opposite - comments are much more useful and nuanced than the source being discussed.

                                          • boxedemp 12 hours ago

                                            Sorry for stating something so obvious. I'll comment less from now on.

                                      • dang 20 hours ago

                                        Related ongoing thread:

                                        Measuring Claude 4.7's tokenizer costs - https://news.ycombinator.com/item?id=47807006 (309 comments)

                                        • ting0 8 hours ago

                                            No, but the AI labs would love to frame it this way so they can keep nerfing models and raising prices while they use the cheap, highly performant models internally to replace all of your businesses.

                                          • onchainintel 6 hours ago

                                            Sure is looking that way. What can't Claude do at this point?

                                            • rickandmorty99 5 hours ago

                                                I'm an AI engineer with a computer science background and some actual AI experience. I'm trying to get Claude to write good motivation letters for job applications. It currently scores a 6 out of 10; I'm still much better. And it has access to all the relevant parts of my psychology degree and to data about writing good motivation letters.

                                              All I can say is: the motivation letters don't look like they're written by AI anymore.

                                              • kaoD 5 hours ago

                                                > What can't Claude do at this point?

                                                Writing maintainable code that scales.

                                            • greenmilk 18 hours ago

                                                Are any inference providers currently making a profit (on inference; I know Google makes money elsewhere)?

                                              • wsun19 17 hours ago

                                                  Pretty much every major American inference provider claims to make a profit on API-based inference. Consumer plans might be subsidized overall, but it's hard to say, since they're a black box and some consumers don't fully use their plans.

                                                • henry2023 16 hours ago

                                                    Third parties selling open-weight inference on OpenRouter are surely selling at a profit. There's zero reason to subsidize it.

                                                  • wavemode 17 hours ago

                                                      Selling inference is not fundamentally different from selling compute: you amortize the lifetime cost of owning and operating the GPUs and turn that into a per-token price. The risk of loss would be low demand (your facilities running underutilized), but I doubt inference providers are suffering from that.

                                                      Where the long-term payoff still seems speculative is for companies doing training rather than just inference.
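                                                      That amortization works out to a per-token floor price; every figure below is purely illustrative:

```python
GPU_COST = 30_000.0       # assumed purchase price of one server GPU, USD
LIFETIME_YEARS = 4        # assumed useful life
OPEX_PER_YEAR = 5_000.0   # assumed power/cooling/hosting per GPU per year
UTILIZATION = 0.6         # assumed fraction of time serving paid traffic
TOKENS_PER_SEC = 500      # assumed per-GPU throughput at serving batch size

total_cost = GPU_COST + OPEX_PER_YEAR * LIFETIME_YEARS
busy_seconds = LIFETIME_YEARS * 365 * 86_400 * UTILIZATION
lifetime_tokens = TOKENS_PER_SEC * busy_seconds

breakeven_per_mtok = total_cost / (lifetime_tokens / 1e6)
print(f"break-even price: ${breakeven_per_mtok:.2f} per Mtok")
```

                                                      Anything charged above that break-even is gross margin; low utilization or a shorter-than-expected hardware lifespan is what pushes it underwater.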

                                                    • Gigachad 16 hours ago

                                                        There's a lot of debate over what the useful lifespan of the hardware is, though. A number that seems very vibes-based determines whether these datacenters are a good investment or a disastrous one.

                                                      • hypercube33 15 hours ago

                                                          I specifically remember this debate coming up when the H100 was the only player on the table and AMD came out with a card that was almost as fast, at least in benchmarks, at about half the cost. I haven't seen a follow-up on real-world use, though, and as a home labber I know that in the last three weeks support for AMD hardware has gotten impressively useful, covering even CUDA if you enjoy pain and suffering.

                                                          What I'm curious about is the other stuff out there, such as the ARM and tensor chips.

                                                    • dannersy 9 hours ago

                                                        If they were, they would show evidence, because it would pull in more investment. I don't believe their claim that they make profits on inference, especially not with reports like this coming out.

                                                      • raincole 15 hours ago

                                                          All of them. It's simply impossible to sell tokens by usage at a loss now; you'd be arbitraged to death in a few days. It only makes sense to subsidize cost if you're selling a subscription.

                                                        • avidiax 4 hours ago

                                                            How do you arbitrage closed-weight models? Who would buy from a middleman at an increased price? Who is offering Priceline, but for tokens?

                                                        • jagged-chisel 18 hours ago

                                                          Google definitely makes money in other areas. Do they make money on inference?

                                                        • quicklywilliam 17 hours ago

                                                          Interesting read. I don't know if I quite buy the evidence, but it's definitely enough to warrant further investigation. It also matches up with my personal experience, which is that tools like Claude Code are burning through more and more tokens as we push them to do bigger and bigger work. But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.

                                                            So: I buy that the cost of frontier performance is going up exponentially, but that doesn't mean there is a fundamental link. We also know that benchmark performance of much smaller/cheaper models has been increasing (as far as I know, METR only looks at frontier models), so that makes me wonder whether the exponential cost/time-horizon relationship holds only for the frontier models.

                                                          • esperent 14 hours ago

                                                            > But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.

                                                              Do we? Because elsewhere in the thread people are claiming they're profitable on API billing and might be at least close to break-even on subscriptions, given that many people don't use their full allowance.

                                                            • ai-x 12 hours ago

                                                              Anthropic has 50% gross margins on their tokens.

                                                                Step 1) Bubble callers are proven wrong in 2026, if not already (no excess capacity)

                                                                Step 2) "Models are not profitable" is proven wrong (when Anthropic files their S-1)

                                                                Step 3) FOMO and an actual bubble (say, around 2028/29)

                                                              • dminik 8 hours ago

                                                                If they had such a high margin, they wouldn't need to fuck around with token usage/pricing every three days.

                                                                I have no data to support this, but I think they just about break even on API usage and take overall loss on subscriptions/free plans.

                                                                • ai-x 2 hours ago

                                                                  Math / Economics 101 thought experiment.

                                                                   You have a limited stock of 100 Coke cans to sell (that you bought for, say, $1 each).

                                                                   Two large lines form to buy them. One line is offering an average of $3 per can and the other an average of $2 per can.

                                                                   Tell me which line they would throttle/starve, even though they make a profit from it.

                                                                   Also: when the lines formed you had no idea of the average prices, but now you're getting a clear picture. Would you change your strategy/pricing, or stick with your original "give a can to everyone at the same initial $1 price"?
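                                                                   The thought experiment in code, keeping the commenter's figures:

```python
CANS = 100
UNIT_COST = 1.0
PRICE_A, PRICE_B = 3.0, 2.0   # the two lines' average offers

def profit(cans_to_a: int) -> float:
    """Profit from selling some cans to the $3 line, the rest to the $2 line."""
    cans_to_b = CANS - cans_to_a
    return cans_to_a * (PRICE_A - UNIT_COST) + cans_to_b * (PRICE_B - UNIT_COST)

best = max(range(CANS + 1), key=profit)
print(f"sell {best} cans to the $3 line -> profit ${profit(best):.0f}")
```

                                                                   With fixed supply and both lines profitable, the seller maximizes by starving the lower-paying line entirely.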

                                                                  • dminik 39 minutes ago

                                                                     If I owned two lines both selling the same thing (presumably Coke is a stand-in for compute here), I would throttle the $2 line. People without a choice might move to the $3 line.

                                                                    Unfortunately, back in the real world, Anthropic is dealing with two issues:

                                                                    1. They're throttling all lines. Their latest model uses more tokens overall. Tokens are being rationed and context is being lowered.

                                                                    2. There's another line for Pepsi right over there. And it costs $1.25 per can.

                                                                    Anthropic should be lowering their price to compete with OpenAI, but they're not. They're making it even more expensive.

                                                                    So tell me, does that really look like Anthropic is running a (as some people say) >50% profit margin?

                                                                • 2848484995 10 hours ago

                                                                  Can we see them?

                                                            • agentifysh 17 hours ago

                                                               Until there is some drastically new hardware, we are going to see a situation similar to proof-of-work mining, where a small group hoards the hardware and can collude on prices.

                                                               The difference is that current prices are heavily subsidized with OPM (other people's money).

                                                               Once the narrative changes to something more realistic, I can see prices increasing across the board. Forget $200/month for Codex Pro; expect $1000/month or something similar.

                                                               So it's a race between a new supply of hardware, with new paradigm shifts that can hit the market, and the tide going out in the financial markets.

                                                              • jiggawatts 13 hours ago

                                                                > Until there is some drastic new hardware

                                                                For inference, there is already a 10x improvement possible over a setup based on NVIDIA server GPUs, but volume production, etc... will take a while to catch up.

                                                                During inference the model weights are static, so they can be stored in High Bandwidth Flash (HBF) instead of High Bandwidth Memory (HBM). Flash chips are being made with over 300 layers and they use a fraction of the power compared to DRAM.

                                                                NVIDIA GPUs are general purpose. Sure, they have "tensor cores", but that's a fraction of the die area. Google's TPUs are much more efficient for inference because they're mostly tensor cores by area, which is why Gemini's pricing is undercutting everybody else despite being a frontier model.

                                                                New silicon process nodes are coming from TSMC, Intel, and Samsung that should roughly double the transistor density.

                                                                There's also algorithmic improvements like the recently announced Google TurboQuant.

                                                                Not to mention that pure inference doesn't need the crazy fast networking that training does, or the storage, or pretty much anything other than the tensor units and a relatively small host server that can send a bit of text back and forth.

                                                                • zozbot234 12 hours ago

                                                                  > Flash chips are being made with over 300 layers and they use a fraction of the power compared to DRAM.

                                                                  Isn't reading from flash significantly more power intensive than reading DRAM? Anyway, the overhead of keeping weights in memory becomes negligible at scale because you're running large batches and sharding a single model over large amounts of GPUs. (And that needs the crazy fast networking to make it work; you get too much latency otherwise.)

                                                                  • jiggawatts 10 hours ago

                                                                    For a given capacity of memory, Flash uses far less power than DRAM, especially when used mostly for reads.

                                                                    > becomes negligible at scale

                                                                    Nothing is negligible at scale! Both the cost and power draw of HBM are a limiting factor for the hyperscalers, to the point that Sam Altman (famously!) cornered the market and locked in something like 40% of global RAM production, driving up prices for everyone.

                                                                    > sharding a single model over large amounts of GPUs

                                                                    A single host server typically has 4-16 GPUs directly connected to the motherboard.

                                                                    A part of the reason for sharding models between multiple GPUs is because their weights don't fit into the memory of any one card! HBF could be used to give each GPU/TPU well over a terabyte of capacity for weights.

                                                                    Last but not least, the context cache needs to be stored somewhere "close" to the GPUs. Across millions of users, that's a lot of unique data with a high churn rate. HBF would allow the GPUs to keep that "warm" and ready to go for the next prompt at a much lower cost than keeping it around in DRAM and having to constantly refresh it.
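A rough sizing exercise shows why that per-user context cache is a real storage problem at scale. The model dimensions below are illustrative assumptions (a 70B-class dense model with grouped-query attention), not any vendor's actual numbers:

```python
# Back-of-envelope KV-cache sizing, to show why the per-user context
# cache adds up. Dimensions are illustrative (roughly a 70B-class model
# with grouped-query attention), not any vendor's published figures.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, dtype_bytes=2):
    # 2x for the separate K and V tensors stored per layer
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

# 80 layers, 8 KV heads, head_dim 128, fp16, 128k-token context
per_user = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=128_000)
print(f"one 128k context: {per_user / 2**30:.1f} GiB")

# a million users with warm contexts parked near the accelerators
print(f"1M warm contexts: {per_user * 1_000_000 / 2**50:.0f} PiB")
```

Roughly 39 GiB per fully-loaded context; a million of those is tens of PiB, which is the kind of churn-heavy, read-mostly data the HBF argument is about.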

                                                                    • zozbot234 10 hours ago

                                                                      > For a given capacity of memory, Flash uses far less power than DRAM, especially when used mostly for reads.

                                                                      Flash has no idle power, being non-volatile (whereas DRAM has refresh), but active power for reading a constant-sized block is significantly larger for Flash. You can still use Flash profitably, but only for rather sparse and/or low-intensity reads. That probably fits things like MoE layers if the MoE is sparse enough.

                                                                      Also, you can't really use flash memory (especially soldered-in HBF) for ephemeral data like the KV context for a single inference, it wears out way too quickly.

                                                                      • adrian_b 9 hours ago

                                                                        Modern flash memory, with multi-bit cells, indeed requires more power for reading than DRAM, for the same amount of data.

                                                                        However, for old-style 1-bit per cell flash memory I do not see any reason for differences in power consumption for reading.

                                                                        Different array designs and sense amplifier designs and CMOS fabrication processes can result in different power consumptions, but similar techniques can be applied to both kinds of memories for reducing the power consumption.

                                                                        Of course, storing only 1 bit per cell instead of 3 or 4 greatly reduces the density and cost advantages of flash memory, but what remains may still be enough for what inference needs.

                                                                        • zozbot234 5 hours ago

                                                                          The basic physics of reading from Flash vs. DRAM are broadly similar, and it's true that reading from SLC flash is a bit cheaper, but you'll still need considerably higher voltages and longer read times for flash compared to DRAM. It's not really the same.

                                                                • colechristensen 16 hours ago

                                                                  Doubtful, local models are the competitive future that will keep prices down.

                                                                  128GB is all you need.

                                                                  A few more generations of hardware and open models will find people pretty happy doing whatever they need to on their laptop locally with big SOTA models left for special purposes. There will be a pretty big bubble burst when there aren't enough customers for $1000/month per seat needed to sustain the enormous datacenter models.

                                                                  Apple will win this battle and nvidia will be second when their goals shift to workstations instead of servers.

                                                                  • hypercube33 15 hours ago

                                                                    Weird how you're leaving stuff like Strix Halo out. Also weird that you think 128GB is the future, with all of the research being done to reduce that to something around 12GB as a target, given all of these papers out now. I assume we'll end up with fewer general-purpose models and more specific small ones swapped out for whatever work you are asking it to do.

                                                                    • MrBuddyCasino 13 hours ago

                                                                      Strix Halo hasn't got nearly enough bandwidth; its bus is just 256-bit.

                                                                      • Tepix 12 hours ago

                                                                        It's sufficient for some MoE models.
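A back-of-envelope bound makes this bandwidth argument concrete: single-stream decode is memory-bandwidth limited, so tokens/sec is at most bandwidth divided by the bytes of weights touched per token. The model sizes and quantization below are illustrative assumptions, not benchmarks:

```python
# Rough ceiling on single-stream decode speed: generation is memory-
# bandwidth bound, so tokens/sec is at most bandwidth divided by the
# bytes of weights read per token. All figures are illustrative.

def max_tokens_per_sec(bandwidth_gb_s, active_params_billions, bits_per_weight=4):
    bytes_per_token = active_params_billions * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

STRIX_HALO_BW = 256  # GB/s-class, implied by the 256-bit bus discussed above

# Dense 70B at 4-bit: every token touches all 70B weights
print(f"dense 70B:      ~{max_tokens_per_sec(STRIX_HALO_BW, 70):.1f} tok/s")

# MoE with ~13B active parameters per token (Mixtral-like, as an example)
print(f"MoE 13B active: ~{max_tokens_per_sec(STRIX_HALO_BW, 13):.1f} tok/s")
```

Under these assumptions a dense 70B model crawls at single-digit tokens/sec, while a sparse MoE with a small active parameter count is several times faster on the same bus, which is the gist of both comments above.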

                                                                    • lookaround 16 hours ago

                                                                      > 128GB is all you need.

                                                                      My guy, look around.

                                                                      They are coming for personal compute.

                                                                      Where are you going to get these 128GBs? Aquaman? [0]

                                                                      The ones who make RAM are inexplicably attaching their fate to the future being all LLMs only everywhere.

                                                                      [0] https://www.youtube.com/watch?v=0-w-pdqwiBw

                                                                      • naveen99 16 hours ago

                                                                        Cloud can’t make money off of you and pay more than you for the hardware at the same time.

                                                                        • adrianN 14 hours ago

                                                                          Batch inference is much more efficient. Using the hardware round the clock is much more efficient. Cloud can absolutely pay more for hardware and still make money off you.
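A toy cost model illustrates the point. Every number below is a made-up assumption; only the shape of the result matters:

```python
# Toy cost model for the batching argument: one pass over the weights can
# serve a whole batch of requests, and cloud hardware runs near-continuously,
# so per-token cost collapses. Every number below is a made-up assumption.

def cost_per_token(hw_cost_per_hour, tokens_per_sec, utilization):
    return hw_cost_per_hour / (tokens_per_sec * 3600 * utilization)

# Local workstation: one user, mostly idle
local = cost_per_token(hw_cost_per_hour=0.50, tokens_per_sec=30, utilization=0.05)

# Cloud GPU: batched requests keep it busy around the clock
cloud = cost_per_token(hw_cost_per_hour=3.00, tokens_per_sec=3000, utilization=0.70)

print(f"local: ${local * 1e6:.0f} per 1M tokens")
print(f"cloud: ${cloud * 1e6:.2f} per 1M tokens")
```

Even with a 6x pricier machine, the cloud side comes out orders of magnitude cheaper per token, purely from batching and utilization.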

                                                                          • bitwize 14 hours ago

                                                                            Cloud can pay more for RAM until all the RAM producers withdraw from the consumer market, then prices will go back down.

                                                                            End users will still get access to RAM. The cloud terminal they purchase from Apple, Google, Samsung, or HP will have all the RAM it will ever need directly soldered onto it.

                                                                            • naveen99 5 hours ago

                                                                              Ram upgrades are happening because of ddr5. Nvme upgrades are happening because of pcie5. Prices will come down once everyone is done upgrading.

                                                                              • xantronix 13 hours ago

                                                                                I was really fucking hoping we weren't at the part where "cloud terminals" doesn't seem farfetched and paranoid and yet here we are. Jesus Christ.

                                                                                • bitwize 11 hours ago

                                                                                  The next step, I think, will be a "cash for clunkers" program to permit people to trade in old computer hardware to the government—especially since operating systems that do not collect KYC data on their users will soon be illegal to operate.

                                                                                • seanmcdirmid 14 hours ago

                                                                                Doesn't Apple place RAM directly into the SoC package? We aren't even talking about soldering it to motherboards anymore; it comes packaged with the CPU, just like it does on a GPU.

                                                                              • foota 16 hours ago

                                                                              More like RAM producers are providing supplies to the highest bidder, no? If this doesn't peter out, supply will normalize at a higher but less insane price eventually.

                                                                          • lwhi 8 hours ago

                                                                            I think an interesting counterpoint, is whether the value obtained is reducing.

                                                                            • jillesvangurp 6 hours ago

                                                                            Exactly. AI is not really replacing people but it's definitely allowing them to do more and more interesting things. You should offset the cost of having an AI do something against the cost of doing that manually. Your mileage may vary of course. But I am definitely getting things done that I wouldn't even have started without AI assistance. And that stuff is valuable to me. Although you could argue that anything AI can do is actually deflating in value as well. The economics here will get pretty interesting. But all things considered, I'm not spending an unreasonable amount on all this AI stuff. Probably around $60-100/month currently. It varies a bit.

                                                                              • lwhi 5 hours ago

                                                                                Value feels pretty relative to me. If anyone can do a 'thing', is that thing worth less?

                                                                                • jillesvangurp 5 hours ago

                                                                                  If you still need that thing done, the value is basically however you value your time. Would you pay extra for having someone or something do that for you instead?

                                                                                  • lwhi 15 minutes ago

                                                                                  But is there as much value in doing the thing in the first place if anyone else can do it cheaply too?

                                                                            • matt3210 16 hours ago

                                                                            I took a month break and my side project used 2x as many tokens

                                                                              • siliconc0w 14 hours ago

                                                                                Working on a oss tool to help orgs identify where they can save on token costs: https://repogauge.org

                                                                                Happy to run it on your repos for a free report: hi@repogauge.org

                                                                                • twaldin 9 hours ago

                                                                                idk, in my testing glm-5 inside opencode beats all other agents head to head

                                                                                  • EdvinPL 8 hours ago

                                                                                  AI feels more like a gamble. People like gambling. From casinos (win-lose), to lootboxes (uncertainty) or even extramarital sex (whose baby is it?).

                                                                                    This way - AI work is like a slot machine - will this work or not? Either way - casino gets paid and casino always wins.

                                                                                    Nevertheless - if the idea or product is very good (filling high market pain) and not that difficult to build - it can enable non-coders to "gamble" for the outcome with AI for $.

                                                                                  Sadly - from my experience hiring devs - hiring people is also a gamble...

                                                                                    • ketzu 7 hours ago

                                                                                      > or even extramarital sex (whose baby is it?).

                                                                                    This is the weirdest example of "gambling" I have seen in my life. If you'd've written "unprotected sex" I'd see the gambling part, but "extramarital sex" covers so much more than the tiny subset of "whose baby is it" (how many people are there having sex to gamble on who will be the father of a baby? 10?).

                                                                                      This made my day.

                                                                                      • sh4rks 7 hours ago

                                                                                        Writing code by hand is gambling (will it compile or not, will it pass code review or not)

                                                                                        • stavros 7 hours ago

                                                                                          Under this definition, everything is gambling, including commenting on HN (will I get upvoted or downvoted?).

                                                                                          • pocksuppet 5 hours ago

                                                                                            There used to be forums without voting. It was discovered that forums with voting attract more engagement because of the emotions produced by the voting.

                                                                                            • BarryMilo 4 hours ago

                                                                                              It also used to be that reddit comments were the epitome of quality in their time, much closer to current HN if not better. I attributed that to the voting mechanism; clearly I was mistaken.

                                                                                            • sirl1on 6 hours ago

                                                                                              Everything is modeled after gambling nowadays to nudge and gamify the user experience to a desired outcome

                                                                                              • stavros 5 hours ago

                                                                                                If your definition of "gambling" is "things with uncertain outcome", then there's nothing in life that's not gambling.

                                                                                          • noosphr 15 hours ago

                                                                                            Yet again: Transformers are fundamentally quadratic.

                                                                                            If they can do a task that takes 1 unit of computation for 1 dollar, they will cost 100 dollars for a 10-unit task and 10,000 for a 100-unit task.

                                                                                            Project costs from Claude Code bear this out in the real world.
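The claimed scaling can be sketched with a toy calculation. Attention's pairwise token interactions are what make it quadratic; real agent costs also include linear terms, so this is an upper-bound illustration, not a full cost model:

```python
# Toy illustration of quadratic attention scaling: the pairwise
# token-to-token interactions grow with the square of context length.
# "Units" are arbitrary; real costs also include linear (MLP) terms,
# so this is an upper-bound sketch, not a full cost model.

def attention_cost(context_units):
    return context_units ** 2  # n tokens -> n*n pairwise interactions

for n in (1, 10, 100):
    print(f"{n:>3}-unit task -> {attention_cost(n):>6}x the base cost")
```

This reproduces the 1 / 100 / 10,000 progression in the comment above.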

                                                                                            • keepamovin 12 hours ago

                                                                                              My expectation: demand going up, prices will rise, supply will saturate to the point of ubiquitous "utility" status, and prices will drop, probably a bell curve shape with sine-wave undulations along the way.

                                                                                              • chii 11 hours ago

                                                                                                > supply will saturate

                                                                                                that depends on the ability to produce supply at a saturation rate.

                                                                                                It did work for internet backhaul links, a la those dark fibres. However, I reckon those fibres are easier to manufacture than silicon chips.

                                                                                                I wonder if saturation is possible for ai capable chips.

                                                                                                • keepamovin 2 hours ago

                                                                                                  No concerns about the hardware supply chain in the long view. By "saturate" I mean AI as a utility, ubiquitous in everything.

                                                                                              • stainablesteel 3 hours ago

                                                                                                it's not like cost and energy use aren't competitive factors in this game

                                                                                                the first model to outcompete its competitors while using less compute would be purchased more than anything else