• princealiiiii 19 hours ago

    Any app built on top of these model providers could become a competitor to them. Since the model providers are currently in the lowest-margin part of the business, they will likely try to expand into the app layer and start pulling the rug out from under the businesses built on top of them.

    Amazon had a similar tactic, where it would use other sellers on its marketplace to validate market demand for products, and then produce its own cheap copies of the successes.

    • jsnell 19 hours ago

      The model providers are not in the low-margin part of the business. The unit economics of paid-per-token APIs are clearly favorable, and scale amazingly well as long as you can procure enough compute.

      I think it's the subscription-based models that are tricky to make work in the long term, since they suffer from adverse selection. Only the heaviest users will pay for a subscription, and those are the users that you either lose money on or make unhappy with strict usage limits. It's kind of the inverse of the gym membership model.
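
      A toy model of that adverse-selection dynamic (all numbers are invented, purely to illustrate the mechanism):

```python
# Toy adverse-selection model: users subscribe only if their expected
# monthly usage would cost more than the flat subscription price, so the
# subscriber pool skews toward money-losing heavy users.
# All numbers here are invented for illustration.

subscription_price = 20.0                    # $/month, hypothetical
monthly_costs = [2, 5, 8, 15, 30, 60, 120]   # each user's inference cost, $

subscribers = [c for c in monthly_costs if c > subscription_price]
revenue = subscription_price * len(subscribers)
cost_to_serve = sum(subscribers)

print(subscribers)              # [30, 60, 120] -> only heavy users sign up
print(revenue - cost_to_serve)  # -150.0 -> the subscriber pool loses money
```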

      Honestly, I think the subscriptions are mainly used as a demand moderation method for advanced features.

      • raincole 19 hours ago

        > The model providers are not in the low margin part of the business.

        Many people believe that model providers are running at negative margin.

        (I don't know how true it is.)

        • jsnell 18 hours ago

          Yes, many people believe that, but it doesn't seem to be an evidence-based belief. I've written about this in some detail[0][1] before. But since just linking to one's own writing is a bit gauche and doesn't make for a good discussion, I'll summarize :)

          1. There is no point in providing paid APIs at negative margins, since there's no platform power in having a larger paid API share (paid access can't be used for training data, no lock-in effects, no network effects, no customer loyalty, no pricing power on the supply side since Nvidia doesn't give preferential treatment to large customers). Even selling access at break-even makes no sense, since that is just compute you're not using for training, or not selling to other companies desperate for compute.

          2. There are 3rd-party providers selling only the compute, not models, who have even less reason to sell at a loss. Their prices are comparable to 1st-party providers.

          3. Deepseek published their inference cost structure for R1. According to that data their paid API traffic is very lucrative (their GPU rental costs for inference are under 20% of their standard pricing, i.e. >80% operating margins; and the rental costs would cover power, cooling, depreciation of the capital investment).

          Insofar as frontier labs are unprofitable, I think it's primarily due to them giving out vast amounts of free access.
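
          As a sanity check on point 3, the margin arithmetic is simple (the prices below are hypothetical placeholders, not DeepSeek's actual numbers):

```python
# Back-of-the-envelope operating margin: if GPU rental for inference is
# under 20% of the API list price, the implied margin is over 80%.
# Both numbers below are hypothetical placeholders.

price_per_m_tokens = 2.19      # $ per 1M output tokens (assumed list price)
gpu_cost_per_m_tokens = 0.40   # $ rental cost to serve those tokens (assumed)

margin = 1 - gpu_cost_per_m_tokens / price_per_m_tokens
print(f"operating margin: {margin:.0%}")  # prints: operating margin: 82%
```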

          [0] https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

          [1] https://news.ycombinator.com/item?id=44165521

          • lmeyerov 17 hours ago

            I think you're missing 2 big aspects:

            1. High-volume providers get efficiencies that low-volume providers do not. These come both from more workload creating more optimization opportunities, and from being staffed to do better engineering in the first place. The result is that break-even for a lower-volume firm is profitable for a higher-volume one, and since high volume means magnitudes more scale, this quickly pays for many people. By being the high-volume API, this game can be played. If they choose not to bother, it is likely because of strategic views on opportunity cost, not inability.

            That's not even the interesting analysis, which is what the real stock value is, or whatever corp structure scheme they're doing nowadays:

            2. Growth for growth's sake. Uber was exactly this kind of growth-at-all-costs play, going deeper into debt with every customer and fundraise. My understanding is they were eventually able to tame costs and find side businesses (delivery, ...), with the threat becoming more about the category shift to self-driving. By owning the channel, they could be the one to monetize as that got figured out.

            Whether tokens or something else becomes what is charged for at the profit layers (with breakeven tokens as cost of business), or subsidization ends and competitive pricing dominates, being the user interface to chat and the API interface to devs gives them channel. Historically, it is a lot of hubris to believe channel is worthless, and especially in an era of fast cloning.

            • jsnell 16 hours ago

              > High volume providers get efficiencies that low volume do not

              But paid-per-token APIs at negative margins do not provide scaling efficiencies! It's just the provider giving away a scarce resource (compute) for nothing tangible in exchange. Whatever you're able to do with that extra scale, you would have been able to do even better if you hadn't served this traffic.

              In contrast, the other things you can use the compute for have a real upside for some part of the genai improvement flywheel:

              1. Compute spent on free users gives you training data, allowing the models to be improved faster.

              2. Compute spent on training allows the models to be trained, distilled and fine-tuned faster. (Could be e.g. via longer training runs or by being able to run more experiments.)

              3. Compute spent on paid inference with positive margins gives you more financial resources to invest.

              Why would you intentionally spend your scarce compute on unprofitable inference loads rather than the other three options?

              > 2. Growth for growths sake.

              That's fair! It could in theory be a "sell $2 for $1" scenario from the frontier labs that are just trying to pump up their revenue numbers to fund-raise from dumb money who don't think to at least check on the unit economics. OpenAI's latest round certainly seemed to be coming from the dumbest money in the world, which would support that.

              I have two rebuttals:

              First, it doesn't explain Google, who a) aren't trying to raise money, b) aren't breaking out genai revenue in their financials, so pumping up those revenue numbers would not help at all. (We don't even know how much of that revenue is reported under Cloud vs. Services, though I'd note that the margins have been improving for both of those segments.)

              Second, I feel that this hypothetical, even if plausible, is trumped by Deepseek publishing their inference cost structure. The margins they claim for the paid traffic are high by any standard, and they're usually one of the cheaper options at their quality level.

              • lmeyerov 16 hours ago

                I think you ignored both of my points -

                1. You just negated a technical statement with... I don't even know what. Engineering opportunities at volume and high skill allow changing the margin in ways low volume and low capitalization provider cannot. Talk to any GPU ML or DC eng and they will rattle off ways here. You can claim these opportunities aren't enough, but you don't seem to be willing to do so.

                2. Again, even if tokens are unprofitable at scale (which I doubt), market position means owning a big chunk of the distribution channel for more profitable things. Classic loss leader. Being both the biggest UI and the biggest API is super valuable. E.g., now that code as a vertical makes sense, they bought more UI there, and now they can move from token pricing toward value pricing and fancier schemes - imagine taking on GitHub/Azure/Vercel/... As each UI and API point takes off, they can devour the smaller players who were building on top and take over the verticals.

                Separately, I do agree, yes, the API case risks becoming (and staying) a dumb pipe if they fail to act on it. But as much as telcos hate their situation, it's nice to be one.

                • jsnell 15 hours ago

                  I don't think I was ignoring your points. I thought I was replying very specifically to them, to be honest, and providing very specific arguments. Arguments that you, by the way, did not respond to in any way here, beyond calling them "[you] don't even know what". That seems quite rude, but I'll give you the benefit of the doubt.

                  Maybe if you could name one of those potential opportunities, it'd help ground the discussion in the way that you seem to want?

                  Like, let's say that additional volume means one can do more efficient batching within a given latency envelope. That's an obvious scale-based efficiency. But a fuller batch isn't actually valuable in itself: it's only valuable because it allows you to serve more queries.

                  But why? In the world you're positing where these queries are sold at negative margins and don't provide any other tangible benefit (i.e. cannot be used for training), the provider would be even better off not serving those queries. Or, more likely, they'd raise prices such that this traffic has positive margins, and they receive just enough for optimal batching.
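
                  To make the batching example concrete, here's a toy sketch (numbers invented) of why a fuller batch lowers per-query cost but only pays off if the marginal queries are sold above marginal cost:

```python
# Toy batching model: each forward pass has a roughly fixed cost (the model
# weights are read once per pass), so cost per query falls as the batch fills.
# The dollar figure is invented for illustration.

FIXED_COST_PER_BATCH = 1.00  # $ of GPU time per forward pass (assumed)

def cost_per_query(batch_size: int) -> float:
    return FIXED_COST_PER_BATCH / batch_size

print(cost_per_query(4))   # 0.25    -> low volume: $0.25/query
print(cost_per_query(32))  # 0.03125 -> full batch: ~$0.03/query
# The efficiency is real, but it only improves the P&L if the marginal
# queries that fill the batch are priced above their marginal cost.
```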

                  > You can claim these opportunities aren't enough, but you don't seem to be willing to do so.

                  Why would I claim that? I'm not saying that scaling is useless. I think it's incredibly valuable. But scale from these specific workloads is only valuable because these workloads are already profitable. If they weren't, the scarce compute would be better spent on one of the other compute sinks I listed.

                  (As an example, getting more volume to more efficiently utilize the demand troughs is pretty obviously why basically all the major providers have some sort of batch/off-peak pricing plans at very substantial discounts. But it's not something you'd see if their normal pricing had negative margins.)

                  > Engineering opportunities at volume and high skill allow changing the margin in ways low volume and low capitalization provider cannot.

                  My point is that not all volume is the same. Additional volume from users whose data cannot be used to improve the system and who are unprofitable doesn't actually provide any economies of scale.

                  > 2. Again, even if tokens are unprofitable at scale (which I doubt),

                  If you doubt they're unprofitable at scale, it seems you're saying that they're profitable at scale? In that case I'd think we're actually in violent agreement. Scaling in that situation will provide a lot of leverage.

                  • lmeyerov 14 hours ago

                    > But why? In the world you're positing where these queries are sold at negative margins and don't provide any other tangible benefit (i.e. cannot be used for training), the provider would be even better off not serving those queries. Or, more likely, they'd raise prices such that this traffic has positive margins, and they receive just enough for optimal batching. ... But scale from these specific workloads is only valuable because these workloads are already profitable

                    I'm disputing this two-fold:

                    - Software tricks like batching and hardware like ASICs mean that what is negative/neutral margin for a small or unoptimized provider is eventually positive for a large, optimized one. You keep claiming they cannot do this for some reason, or only if already profitable, but those are unsubstantiated claims. Conversely, I'm giving classic engineering reasons why they can keep driving down their COGS to flip to profitability as long as they have capital and scale. This isn't selling $1 for $0.90, because there is a long way to go before their COGS are primarily constrained by the price of electricity and sand. Instead of refuting this, you just keep positing that it's inherently negative margin.

                    In a world where inference consumption just keeps going up, they can keep pushing the technology advantage and creating even a slight positive margin goes far. This is the classic engineering variant of buttoning margins before an IPO: if they haven't yet, it's probably because they are intentionally prioritizing market share growth for engineering focus vs cost cutting.

                    - You are hyper-fixated on tokens, and not on the fact that owning a large % of distribution lets them sell other things. E.g., instead of responding to my point 2 here, you are again talking about token margin. Apple doesn't have to make money on transistors when they have a 30% tax on most app spend in the US.

                    • lmeyerov 14 hours ago

                      Maybe this is the disconnect on the token side: you seem to think they can't keep improving the margin to reach profitability, that margins are static and will just get worse, not better.

                      I think DeepSeek instead just showed they haven't really bothered yet. They'd rather focus on growing, and capital is cheap enough for these firms that optimizing margins is relatively distracting. Obviously they do optimize, but probably not at the expense of velocity and growth.

                      And if they do seriously want to tackle margins, they should pull a Groq/Google and go aggressively deep, e.g. fab something. Which they do indeed fundraise on.

                      • jsnell 13 hours ago

                        No, it feels more like the disconnect is that I think they're all compute-limited and you maybe don't? Almost every flop they use to serve a query at a loss is a flop they didn't use for training, research, or for queries that would have given them data to enable better training.

                        Like, yes, if somebody has 100k H100s and are only able to find a use for 10k of them, they'd better find some scale fast; and if that scale comes from increasing inference workloads by 10x, there's going to be efficiencies to be found. But I don't think anyone has an abundance of compute. If you've instead got 100k H100s but demand for 300k, you need to be making tradeoffs. I think loss-making paid inference is fairly obviously the worst way to allocate the compute, so I don't think anyone is doing it at scale.
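
                        The opportunity-cost argument above can be sketched as a trivial allocation problem (the "value per GPU-hour" figures are invented):

```python
# If compute is scarce, each GPU-hour goes to the highest-value use.
# Negative-margin paid inference never wins that comparison.
# All values are invented illustrations of relative payoff per GPU-hour.

options = {
    "training / research": 1.5,
    "free users (training data)": 1.2,
    "paid inference, positive margin": 1.1,
    "paid inference, negative margin": -0.2,
}

best_use = max(options, key=options.get)
print(best_use)  # training / research
```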

                        > I think deepseek instead just showed they haven't really bothered yet.

                        I think they've all cared about aggressively optimizing for inference costs, though to varying levels of success. Even if they're still in a phase where they literally do not care about the P&L, cheaper costs are highly likely to also mean higher throughput. Getting more throughput from the same amount of hardware is valuable for all their use cases, so I can't see how it couldn't be a priority, even if the improved margins are just a side effect.

                        (This does seem like an odd argument for you to make, given you've so far been arguing that of course these companies are selling at a loss to get more scale so that they can get better margins.)

                        > - You are hyper fixated on tokens, and not that owning a large % of distribution lets them sell other things . Eg, instead of responding to my point 2 here, you are again talking about token margin. Apple doesn't have to make money on transistors when they have a 30% tax on most app spend in the US.

                        I did not engage with that argument because it seemed like a sidetrack from the topic at hand (which was very specifically the unit economics of inference). Expanding the scope will make convergence less likely, not more.

                        There's a very good reason all the labs are offering unmonetized consumer products despite losing a bundle on those products, but that reason has nothing at all to do with whether inference when it is being paid for is profitable or not. They're totally different products with different market dynamics. Yes, OpenAI owning the ChatGPT distribution channel is vastly valuable for them long-term, which is why they're prioritizing growth over monetization. That growth is going to be sticky in a way that APIs can't be.

                        Thanks, good discussion.

                        • lmeyerov 7 hours ago

                          I agree they are compute-limited and disagree that they are aggressively optimizing. Many small teams are consistently demonstrating optimization opportunities all the way from app to software to hardware, and DeepSeek was basically just one especially notable example of many. In my experience, there are levels of effort to get corresponding levels of performance, with complexity slowdowns on everything else, so companies are typically slow-but-steady here, esp when ZIRP-style capital rewards that (which is still effectively in place for OpenAI). Afaict OpenAI hasn't been pounding on doors for performance people, and generally isn't signalling they go hard here vs growth.

                          Re: stickiness => distribution leadership => monetization, I think they were like 80/20 on UI vs API revenue, but as a leader, their API revenue is still huge and still growing, esp as enterprises advance from POCs. They screwed up the API market for coding and some others (voice, video?), so afaict they are more like "one of several market-share leaders" vs "the leader". So the question becomes: why are they able to maintain high numbers here? E.g., is momentum enough that they can stay tied in second, and if they keep lowering costs, stay there, and stay relevant for more vertical flows like coding? Does bundling the UI in enterprise mean they stay a preferred enterprise partner? Etc. Oddly, I think they are at higher risk of losing the UI market than the API market, because an organizational DNA change is likely needed for how it is turning into a broad GSuite/Office scenario vs simple chat (see: Perplexity, Cursor, ...). They have the position, but it seems more straightforward for them to keep it in API vs UI.

                  • solarkraft 5 hours ago

                    > Engineering opportunities at volume and high skill allow changing the margin in ways low volume and low capitalization provider cannot

                    Everything depends on this actually being possible and I haven’t seen a lot of information on that so far.

                    DeepSeek's publication suggests that it is possible - specifically, there was recently a discussion on batching. Google might have some secret sauce with their TPUs (is that why Gemini is so fast?)

                    And there are still Cerebras and Groq (why haven’t they taken over the world yet?), but their improvements don’t appear to be scale dependent.

                    Speculating that inference will get cheaper in the future might justify selling at a loss now to at least gain mind share, I guess.

              • what 17 hours ago

                There are more factors to cost than just the raw compute to provide inference. They can’t just fire everyone and continue to operate while paying just the compute cost. They also can’t stop training new models. The actual cost is much more than the compute for inference.

                • jsnell 17 hours ago

                  Yes, there are some additional operating costs, but they're really marginal compared to the cost of the compute. Your suggestion was personnel: Anthropic is reportedly on a run-rate of $3B with O(1k) employees, most of whom aren't directly doing ops. Likewise they also have to pay for non-compute infra, but it is a rounding error.

                  Training is a fixed cost, not a variable cost. My initial comment was on the unit economics, so fixed costs don't matter. But including the full training costs doesn't actually change the math that much as far as I can tell for any of the popular models. E.g. the alleged leaked OpenAI financials for 2024 projected $4B spent on inference, $3B on training. And the inference workloads are currently growing insanely fast, meaning the training gets amortized over a larger volume of inference (e.g. Google showed a graph of their inference volume at Google I/O -- 50x growth in a year, now at 480T tokens / month[0])

                  [0] https://blog.google/technology/ai/io-2025-keynote/
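
                  The amortization effect is easy to see numerically (using the alleged leaked 2024 figures plus an invented token volume):

```python
# Training is a fixed cost spread over inference volume. With the alleged
# 2024 figures ($3B training, $4B inference) and a hypothetical volume,
# fully-loaded cost per token converges to the variable cost as volume grows.

training_cost = 3e9        # $ fixed (alleged leaked figure)
inference_cost = 4e9       # $ variable (alleged leaked figure)
tokens = 1e15              # hypothetical annual token volume

full_per_token = (training_cost + inference_cost) / tokens
variable_per_token = inference_cost / tokens
print(round(full_per_token / variable_per_token, 3))  # 1.75

# If inference volume grows 50x while training cost stays fixed:
full_50x = (training_cost + inference_cost * 50) / (tokens * 50)
print(round(full_50x / variable_per_token, 3))  # 1.015
```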

                  • ummonk 17 hours ago

                    That's all the more reason to run at a positive margin though - why shovel money into taking a loss on inference when you need to spend money on R&D?

                    • brookst 17 hours ago

                      I heart you.

                      Classic fixed / variable cost fallacy: if you look at the steel and plastic in a $200k Ferrari, it’s worth about $10k. They have 95% gross margins! Outrageous!

                      (Nevermind the engine R&D cost, the pre-production molds that fail, the testing and marketing and product placement and…)

                  • apothegm 18 hours ago

                    They probably have been running at negative margin, or at the very least started that way. But between hardware and software developments, their cost structures are undoubtedly improving over time; otherwise we wouldn't be seeing pricing drop with each new generation of models. In fact, I would bet that their margins are improving in spite of the price drops.

                  • brookst 17 hours ago

                    Citation needed.

                    Model providers spend a ton of money. It is unclear if they will ever have high margins. Today they are somewhere between zero and negative big numbers.

                    • tonyhart7 18 hours ago

                      The subscription model is just there to serve the B2C side of the business, which in turn funnels users into the B2B side.

                      Anthropic themselves said that enterprise is where the money is at, but you can't just serve enterprise from the get-go, right?

                      This is where the indirect B2C influence comes in.

                      • mvdtnz 18 hours ago

                        What evidence do you have that there's decent margin on the APIs?

                      • hshdhdhj4444 18 hours ago

                        Even to the extent that’s true, that doesn’t seem to be the issue here.

                        OpenAI is acquiring Windsurf, which is its most direct competitor.

                        • selcuka 18 hours ago

                          True. Otherwise Anthropic would cut access to other code assistants too as they all compete with Claude Code.

                          • SoftTalker 17 hours ago

                            They might still. Why not?

                            It illustrates a risk of building a product with these AI coding tools. If your developers don't know how to build applications without using AI, then you're at the mercy of the AI companies. You might come to work one day and find that, accidentally or deliberately or as the result of a merger or acquisition, the tools you rely on are suddenly gone.

                            • selcuka 14 hours ago

                              > If your developers don't know how to build applications without using AI, then you're at the mercy of the AI companies.

                              The same can be said if your developers don't know how to build applications:

                              - without using syntax highlighting ...

                              - without using autocomplete ...

                              - without using refactoring tools ...

                              - without using a debugger ...

                              Why do we not care about those? Because these are commodity features. LLMs are also a commodity now. Any company with a few GPUs and bandwidth can deploy the free DeepSeek or QwQ models and start competing with Anthropic/OpenAI. It may or may not be as good as Claude 4, but it won't be a catastrophe either.

                              • pbh101 17 hours ago

                                This is true of any SaaS vendor

                          • brookst 17 hours ago

                            I 100% agree with you except your framing makes it sound like the model providers are doing something wrong.

                            If I spend a ton of money money making the most amazing ceramic dinner plates ever and sell them to distributors for $10 each, and one distributor strikes gold in a market selling them at $100/plate, despite adding no value beyond distribution… hell yeah I’m cutting them off and selling direct.

                            I don’t really understand how it’s possible to see that in moral terms, let alone with the low-value partner somehow a victim.

                            • pbh101 17 hours ago

                              I don’t think it is at all clear that Windsurf adds zero value. Why do you think this is a helpful analogy?

                              • osigurdson 17 hours ago

                                The analogy is a bit like this. Imagine that there are 100 ceramic dinner plates for $6 each. Now someone comes in and buys them from you for $5 each - undercutting your margin. Then a 3rd company comes in and literally eats your lunch on your own ceramic dinner plates. The moral of the story is any story involving ceramic dinner plates is a good one, regardless of the utility of any analogy.

                            • liuliu 19 hours ago

                              Or AWS, and AWS managed services v.s. other managed services on top of AWS.

                            • icelancer 19 hours ago

                              This seems pretty reasonable to me. I don't really understand why Windsurf, owned by OpenAI (allegedly), should expect to have API access to their main competitor's API. People can still bring their own key and use whatever models they want in that way.

                              • hiddencost 16 hours ago

                                IDK, I pay someone money for a service, I would like the terms of our contract to protect me from getting cut off capriciously. The fact that Anthropic is selling a service without providing a contract that provides consumers with protections kinda sucks?

                                • hiddencost 16 hours ago

                                  Also, like, anti competitive behavior is kinda ... Illegal?

                                  • rafram 16 hours ago

                                    Not all anticompetitive practices are illegal, but this isn’t even anticompetitive.

                                    Anticompetitive practices are actions that reduce competitiveness in a market by entrenching your dominance over the (usually smaller) competition.

                                    Not allowing your competitor to buy your product arguably increases competition? It pushes them to improve their own product to be as good as yours.

                                    • threeseed 16 hours ago

                                      No it's not. It's only illegal if you are found guilty of it.

                                      And for that to happen you need to (a) be an effective monopoly, (b) have a negative direct or indirect impact on consumers, (c) be large enough for regulators to care about and (d) be in a regulatory environment that prioritizes this enforcement.

                                  • BoorishBears 19 hours ago

                                    It doesn't make sense and the co-founder is being an intentionally crappy communicator.

                                    He's trying to paint it as removing access to something everyone gets by default, but in reality it's removing special treatment that they were previously giving Windsurf.

                                    The default Anthropic rate limit tiers don't support something Windsurf sized, and right now getting increased rate limits outside of their predefined tiers is an uphill battle because of how badly they're compute constrained.

                                    • consumer451 13 hours ago

                                      > Earlier this week, Windsurf said that Anthropic cut its direct access to Claude 3.5 Sonnet and Claude 3.7 Sonnet, two of the more popular AI models for coding, forcing the startup to find third-party computing providers on relatively short notice. Windsurf said it was disappointed in Anthropic’s decision and that it might cause short-term instability for users trying to access Claude via Windsurf.

                                      As a Windsurf user, maybe Sonnet 3.x models are slower right now, but they don't require BYOK like 4 does. So this is a bit of an exaggeration, isn't it? Anthropic did not cut them off, they seem to have continued honoring existing quota on previous agreements.

                                      What did Windsurf think was going to happen with this particular exit? Also, how embarrassing is this for OpenAI that it's even a big deal?

                                      • BoorishBears 10 minutes ago

                                        They could have moved to Bedrock or Vertex, which both resell Claude access and have autonomy in who they allocate higher rate limits to.

                                        But they're also crunched on Claude 4 capacity, as much as if not more so than Anthropic's first-party access.

                                      • undefined 17 hours ago
                                        [deleted]
                                      • tuyguntn 19 hours ago

                                        This seems odd to me. I don't expect a bakery to refuse to sell me bread this morning because I started working at another bakery nearby.

                                        • charlie0 19 hours ago

                                          No, but if YOU opened up a bakery and were packing their bread and re-selling it, they might not like that. This happened before and it's not great.

                                          https://www.businessinsider.com/restaurant-accused-reselling...

                                          • freehorse 18 hours ago

                                            This is more like about cutting you off reselling their bread, which would be reasonable.

                                            • tuyguntn 11 hours ago

                                              By this logic NVIDIA should cut off Claude, because Claude is reselling their GPU hours when NVIDIA could do it itself.

                                              IMHO, there are 3 types of products in the current LLM space (excluding hardware):

                                                 * Model makers - OpenAI, Claude, Gemini
                                                 * Infra makers - Groq, together.ai and etc
                                                 * Product makers - Cursor, Windsurf and others
                                              
                                              If Level 1 can block Level 3 this easily, that's a problem for the industry in my book, because there will be no trust between the different types of companies. And when there is no trust, some companies become monopolies with bad behavior, which is bad for customers/users.
                                              • zmgsabst 15 hours ago

                                                Is it?

                                                Grocery and department stores routinely have brands that compete with those they resell — but they’re not cut off for that. Eg, Kroger operates its own bakery and resells bread.

                                                What makes technology unlike those?

                                                • smileeeee 8 hours ago

Stores regularly sell boxes of things labelled "not for individual resale". Here, Windsurf would be the shopkeeper selling individual chewing gum strips out of the big package of 10x 5 strips.

                                          • wincy 18 hours ago

                                            Wow excited for when Anthropic buys Cursor in a couple months and I get locked out of using OpenAI models with it. This is depressing. I just want stuff to work.

                                            • paxys 17 hours ago

                                              If LLMs were a sustainable business then Anthropic would have no problem selling Claude to a competitor. Heck they'd brag about it ("see OpenAI uses our models to code instead of their own!"). You see this in the industry all the time. Large tech companies compete with each other and sue each other and have deep business relationships at the same time.

                                              What this change really says is that Anthropic doesn't want to burn VC money to help a competitor. And that is the reality of "I just want stuff to work". It won't just work because there's no stable business underneath it. Until that business can be found things will keep changing and get shittier.

                                              • demosthanos 17 hours ago

                                                Does anyone here actually use OpenAI models in Cursor? I kind of forgot they were even there. I've just been alternating between Claude and Gemini, and the sense I've had in online discussions is that that's pretty normal.

                                                • brookst 17 hours ago

                                                  Friendly advice: you will be happier if you defer reacting to things until they are actually real.

We can all imagine all sorts of terrible futures. Many of us do. But there is no upside in being prematurely outraged.

                                                  • 34679 17 hours ago

                                                    Void is basically the same thing, but open source and better. It's easy to use with any provider API key, even LM Studio for local models. You can use it with free models available from OpenRouter to try it out, but the quality of output is just as dependent on the model as any other solution.

                                                    https://voideditor.com/

                                                    https://github.com/voideditor/void

                                                    • zwarag 11 hours ago

                                                      People like Windsurf and Cursor because they offer them a flatrate.

                                                      • 34679 9 hours ago

                                                        Yeah, for sure. That's why I tried Cursor for a month. But as soon as I ran out of fast requests it became unusable. It had me waiting several minutes for the first token. I didn't realize how bad the experience was, fast requests included, until I used Void. It makes Cursor fast requests seem slow, and I tend to need fewer of them. The differences being that Void requests go straight to the provider instead of their own server first, and their system prompts seem to keep the models more on task.

                                                        • demosthanos 7 hours ago

                                                          How much do you spend per month on Void, though? Your testimonial is great but incomplete without that information.

                                                    • OsrsNeedsf2P 18 hours ago

                                                      This is why I use Aider. Open source or GTFO

                                                      • TiredOfLife 6 hours ago

                                                        And then Anthropic cuts api access to Aider

                                                      • selcuka 18 hours ago

                                                        Anthropic models are slightly better for coding tasks anyway. I believe Windsurf is in more trouble.

                                                        • deadbabe 18 hours ago

                                                          Aren’t the OpenAI models worse than the alternatives?

                                                          • dmix 18 hours ago

                                                            The difference is likely very marginal in practice for what most people are doing. o3 and 4o do programming just fine.

                                                            • jerpint 18 hours ago

That may be true today but not in a few weeks; ideally you'd have access to all models at any time.

                                                            • solumunus 12 hours ago

                                                              Why would they when Claude Code is night and day better? It makes Cursor look like a joke in my experience.

                                                            • nickthegreek 19 hours ago

Now your company needs to worry about building workflows on the back of these neat AI companies: they get bought out and cause disruptions that can reverberate through the org in unknown ways for unknown amounts of time. This, and the OpenAI no-deletion court order, should be a wake-up call about how these technologies get implemented and the trust we cede to them.

                                                              • sebmellen 19 hours ago

                                                                It’s no different from the shell games that have been going on in enterprise software for the past 30 years.

                                                                You can choose independence if you’re willing to use a slightly worse open weight model.

                                                              • widdakay 19 hours ago

                                                                These are two different products. It's like SpaceX launching satellites for competitive satellite internet services. They didn't care that they were providing launch capabilities for a competitor and neither should Anthropic. What if Apple stopped allowing you to use an iPhone if you worked at Google?

                                                                • danpalmer 12 hours ago

                                                                  Another way this isn't comparable is that the training data is critical for the services each provides. Seeing answers Claude gives (at scale, for free) would be a huge competitive advantage to OpenAI, whereas Apple Mail seeing the email you send via Gmail wouldn't convey any competitive advantage in email, for example.

                                                                  • tonyhart7 18 hours ago

                                                                    "They didn't care that they were providing launch capabilities for a competitor"

Yeah, because no other internet provider had SpaceX's reusable rocket technology.

It's not really quite the same, you know.

                                                                    • demosthanos 17 hours ago

                                                                      > What if Apple stopped allowing you to use an iPhone if you worked at Google?

                                                                      This wouldn't be remotely comparable. This is targeting of a competitor's employees, not targeting a competitor's subsidiaries.

                                                                      If you want to go the Apple-Google route a better comparison would be that this is like Apple refusing to allow you to hook up an Air Tag on an Android phone. Which is something that they do, in fact, do.

                                                                      • undefined 17 hours ago
                                                                        [deleted]
                                                                      • thrdbndndn 19 hours ago

                                                                        I think the nature of your two examples, along with the one from the news, is too different from each other for the analogy to really hold. These situations can only be judged on a case-by-case basis.

                                                                        • paxys 17 hours ago

                                                                          If SpaceX was launching every rocket at a loss then they would absolutely care about a competitor taking advantage of it.

                                                                          • killerstorm 18 hours ago

                                                                            Do you understand that data can be used for training?

                                                                          • undefined 13 hours ago
                                                                            [deleted]
                                                                            • k__ 18 hours ago

                                                                              When I read about Cursor and Windsurf, I'm quite happy that I only use Aider. An open source project, not associated with anyone else...

                                                                              • bravesoul2 17 hours ago

                                                                                Curser and Windserf

                                                                                Yeah better to go with open technologies. Maybe use Groq for inference knowing you can switch over later if needed as you are using Llama or Deepseek.

                                                                              • pton_xd 19 hours ago

                                                                                I'm sure this all fits neatly into their EA "maximizing positive outcomes for humanity" world-view somehow.

                                                                                • CyberMacGyver 18 hours ago

It could be argued, from Anthropic's perspective, that OpenAI and Worldcoin are not positive for humanity, so this is in fact necessary.

                                                                                  • dmix 18 hours ago

I don't think EA people are in favor of going out of business and having no money.

                                                                                    • bravesoul2 17 hours ago

                                                                                      Can't save humanity from the heat death of the universe without a castle.

                                                                                  • arnaudsm 17 hours ago

We are about to enter the era of aggressive LLM monetisation and anti-competitiveness. We got used to cheap subsidized models; I bet in two years we'll pay double for the same service.

                                                                                    • jmward01 17 hours ago

Decisions like this may shoot them in the foot later, as open source and cheaper compute continue to push into frontier model territory. I know I have no loyalty to the big providers and would take a minor quality/cost hit to jump ship. Right now it isn't a minor quality/cost hit, though. And knowing that they can cut you off if they think you might end up competing with them makes me tolerate an even bigger gap.

                                                                                      • undefined 19 hours ago
                                                                                        [deleted]
                                                                                        • charlie0 19 hours ago
                                                                                          • dboreham 19 hours ago

                                                                                            Probably most here are not old enough to remember, but there was a time when Google had all sorts of data access APIs and encouraged developers to create applications that used said APIs. And then they disabled them all.

                                                                                            • bitpush 19 hours ago

                                                                                              How is that relevant?

                                                                                              • 8note 18 hours ago

API openness is temporary; don't build a business on it.

                                                                                            • HyprMusic 18 hours ago

                                                                                              This feels like a cheap trick to drive users towards Claude Code. It's likely no coincidence that this happened at the same time they announced subscription access to Claude Code.

The Windsurf team has repeatedly stated that they're running at a loss, so all this seems to have achieved is giving OpenAI an excuse to cut their third-party costs and drive more Windsurf users towards their own models.

                                                                                              • artdigital 18 hours ago

                                                                                                > This feels like a cheap trick to drive users towards Claude Code

How did you come to this conclusion? It's very much as the parent remarked: OpenAI acquired Windsurf, and OpenAI is Anthropic's direct competitor.

                                                                                                It doesn’t make strategic sense to sell Claude to OpenAI. OpenAI could train against Claude weights, or OpenAI can cut out Anthropic at any moment to push their own models.

                                                                                                The partnership isn’t long lasting so it doesn’t make sense to continue it.

                                                                                                • selcuka 17 hours ago

                                                                                                  > OpenAI could train against Claude weights

                                                                                                  OpenAI can always buy a Claude API subscription with a credit card if they want to train something. This change only prevents the Windsurf product from offering Claude APIs to their customers.

                                                                                                  • brookst 17 hours ago

                                                                                                    Other than, you know, terms and contracts.

                                                                                                    • selcuka 14 hours ago

                                                                                                      But that's a completely different story, no? Cutting off Windsurf has nothing to do with enforcing that T&C.

                                                                                                      • undefined 14 hours ago
                                                                                                        [deleted]
                                                                                                  • huxley 17 hours ago

                                                                                                    Totally irrelevant, Anthropic isn’t cutting off OpenAI, it is cutting off Windsurf users.

                                                                                                    • artdigital 17 hours ago

Windsurf users can still plug in their own Anthropic key and continue using the models. It's Windsurf subscribers (i.e. OpenAI customers) who use the models through the Windsurf service (with Windsurf's servers, now OpenAI's, acting as proxy) that are getting cut off.

I don't see how this is irrelevant. Windsurf is now a first-party product of Anthropic's most direct competitor. Imagine a car company integrating the cloud tech of a rival manufacturer.
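To make the distinction concrete, here's a minimal sketch of what "plugging in your own key" amounts to: calling Anthropic's Messages API directly, so billing and access sit between you and Anthropic with no vendor proxy in the middle. The key and model name below are placeholders, and the request is only built, not sent:

```python
import json
import urllib.request

API_KEY = "sk-ant-..."  # placeholder; your own Anthropic key, billed directly to you

def build_request(prompt: str) -> urllib.request.Request:
    """Build a direct call to Anthropic's Messages API -- no vendor proxy in between."""
    payload = {
        "model": "claude-3-5-sonnet-latest",  # placeholder model name
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(payload).encode(),
        headers={
            "x-api-key": API_KEY,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

# Build (but don't send) a request, to show what goes over the wire.
req = build_request("Explain this stack trace")
```

When Windsurf sits in the middle, the same request instead goes to Windsurf's endpoint under Windsurf's key, which is exactly the access Anthropic revoked.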

                                                                                                      • HyprMusic 10 hours ago

                                                                                                        Exactly, it's like car manufacturers cutting off Android because Google own Waymo. The only people that pay are the consumers.

                                                                                                  • ramesh31 18 hours ago

Nobody is actually using Windsurf. It was an acquihire and a squashing of a competitor that gained ground in the enterprise contract market really early. Anyone doing agentic coding seriously is using open source tooling with direct token pricing from the major model providers. Windsurf, Cursor, et al. are just expensive middlemen with no added value.

                                                                                                    • peterhadlaw 18 hours ago

Which open source agentic tooling are you using? I'm a fan of Aider but I find it lacking on the agentic side of things. I've looked at Goose, Plandex, Opencode, etc. Which do you like?

                                                                                                      • ramesh31 17 hours ago

                                                                                                        Cline all the way: https://cline.bot/.

                                                                                                        Haven't found anything else that even comes close.

                                                                                                        • peterhadlaw 17 hours ago

                                                                                                          Dang, was hoping for something terminal based <3 but thank you

                                                                                                      • undefined 18 hours ago
                                                                                                        [deleted]
                                                                                                        • TiredOfLife 6 hours ago

                                                                                                          If nobody is using then why cut off access?

                                                                                                      • undefined 19 hours ago
                                                                                                        [deleted]
                                                                                                        • tuyguntn 19 hours ago

I don't know what kind of agreement they had, or whether there was any agreement at all, but with this move Anthropic is showing itself to be an unreliable provider.

It amounts to: we can cut access anytime, because "I think it would be odd for us to be selling Claude to <YOUR_COMPANY>"

                                                                                                          • undefined 20 hours ago
                                                                                                            [deleted]
                                                                                                            • undefined 20 hours ago
                                                                                                              [deleted]
                                                                                                              • bravesoul2 18 hours ago

                                                                                                                Antitrust?

                                                                                                                Maybe GitHub and Microsoft should kick out all competing company 3rd party integrations.

                                                                                                                See where this leads...

                                                                                                                • mountainriver 18 hours ago

This should be a clear signal to the community that Anthropic can't be trusted. OpenAI can't either; TBD on Google.

                                                                                                                  • alehlopeh 18 hours ago

                                                                                                                    TBD on whether Google can be trusted? That ship sailed long ago.

                                                                                                                    • mountainriver 18 hours ago

To steal markets? Honestly I can't think of an example, but someone can correct me.

I certainly don't trust them not to kill whole products at will.