• simonw 2 days ago

    This price drop is significant. For <128,000 tokens they're dropping from $3.50/million to $1.25/million, and output from $10.50/million to $2.50/million.

    For comparison, GPT-4o is currently $5/million input and $15/million output and Claude 3.5 Sonnet is $3/million input and $15/million output.

    Gemini 1.5 Pro was already the cheapest of the frontier models and now it's even cheaper.

    • pzo 2 days ago

      What's confusing is that they list different output pricing. Here [1] it's $5/million output tokens (starting October 1st), while on Vertex AI [2] it's $2.50 per million (starting October 7th) - but per character rather than per token - so it works out more expensive once you convert it to the equivalent of 1 million tokens. It's even more confusing trying to work out what kind of characters they mean: 1 byte? UTF-8?

      [1] https://ai.google.dev/pricing

      [2] https://cloud.google.com/vertex-ai/generative-ai/pricing

      • Deathmax 2 days ago

        They do mention how characters are counted in the Vertex AI pricing docs: "Characters are counted by UTF-8 code points and white space is excluded from the count"
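
        A minimal sketch of how I read that rule (my own interpretation, not Google's actual metering code): count Unicode code points and skip whitespace.

            def billable_characters(text: str) -> int:
                # Each element of a Python str is one Unicode code point;
                # whitespace is excluded, per the Vertex AI pricing docs.
                return sum(1 for ch in text if not ch.isspace())

            print(billable_characters("Hello, world!"))  # 12 (the space is not counted)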

      • RhodesianHunter 2 days ago

        I wonder if they're pulling the Walmart model: ruthlessly cut costs and sell at or below cost until your competitors go out of business, then ratchet up prices once you have market dominance.

        • lacker 2 days ago

          Probably not. Do they really believe they are going to knock OpenAI out of business, when the OpenAI models are better?

          Instead I think they are going after the "Android model". Recognize they might not be able to dethrone the leader who invented the space. Define yourself in the marketplace as the cheaper alternative: "less good but almost as good." In the end, they hope to be one of a small number of surviving members of a valuable oligopoly.

          • scarmig 2 days ago

            Cheapness has a quality all its own.

            Gemini is substantially cheaper to run (in consumer prices, and likely internally as well) than OpenAI's models. You might wonder, what's the value in this, if the model isn't leading? But cheaper inference could potentially be a killer edge when you can scale test-time compute for reasoning. Scaling test-time compute is, after all, what makes o1 so powerful. And this new Gemini doesn't expose that capability at all to the user, so it's comparing apples and oranges anyway.

            DeepMind researchers have never been primarily about LLMs, but RL. If DM's (and OAI's) theory is correct--that you can use test-time compute to generate better results, and train on that--this is potentially a substantial edge for Google.

            • zaptrem 2 days ago

              Google still has an unbelievable training infrastructure advantage. The second they can figure out how to convert that directly to model performance without worrying about data (as the o1 blog post seemed to imply OAI had) they’ll be kings.

              • llelouch 2 days ago

                This is why Sam Altman keeps releasing things a few days before DeepMind. He is more worried about Google overtaking them than about any other company.

              • Dr4kn 2 days ago

                In Home Assistant you can use LLMs to control your home with your voice. Gemini performs similarly to the GPT models, and with the cost difference there is little reason to choose OpenAI.

                • a_wild_dandan 2 days ago

                  Using either frontier model for basic edge device problems is wasteful. Use something cheap. We're asking "is there a profitable niche between the best & runner-up models?" I believe so.

              • GaggiX 2 days ago

                Android is more popular than iOS by a large margin, and it's neither less good nor cheaper; it really depends on the smartphone.

                • undefined 2 days ago
                  [deleted]
                • socksy 2 days ago

                  The latest Google Pixel phone (you know, the one that Google actually set the price for) appears to cost the exact same as the latest iPhone ($999 for pro, $799 for non-pro). And I would argue against the "less good" bit too.

                  I think this analysis is not in keeping with reality, and I doubt if that's their strategy.

                  • rajup 2 days ago

                    I doubt anyone buys Pixel phones at full price. They are discounted almost right out of the gate.

                  • JeremyNT 2 days ago

                    > Probably not. Do they really believe they are going to knock OpenAI out of business, when the OpenAI models are better?

                    Would OpenAI even exist without Google publishing their research? The idea that Google is some kind of also-ran playing catch up here feels kind of wrong to me.

                    Sure OpenAI gave us the first productized chatbots, so in that sense they "invented the space," but it's not like Google were over there twiddling their thumbs - they just weren't exposing their models directly outside of Google.

                    I think we're past the point where any of these tech giants have some kind of moat (other than hardware, but you have to assume that Google is at least at parity with OpenAI/MS there).

                  • bko 2 days ago

                    Isn't Walmart still incredibly cheap? They have a net margin of 2.3%

                    I think that's one of those things competitors complain about that never actually happens (the raising prices part).

                    https://www.macrotrends.net/stocks/charts/WMT/walmart/net-pr...

                    • sangnoir 2 days ago

                      There's a lot of room to cut margins in the AI stack right now (see Nvidia's latest report); low prices are not a sure indication of predatory pricing. Which of Anthropic, OpenAI and Google do you think is most likely to have the lowest training and inference costs? My bet is on the one designing, producing and using its own TPUs.

                      • charlie0 2 days ago

                        Yes, and it's the exact same thing OpenAI/Microsoft and Facebook are doing. In Facebook's case, they are giving it away for free.

                        • entropicdrifter 2 days ago

                          You think Google would engage in monopolistic practices like that?

                          Because I do

                          • 0cf8612b2e1e 2 days ago

                            I have no idea if this is dumping or not. At Microsoft/Google scale, what does it cost to serve a million LLM tokens?

                            Tough to disentangle the capex vs opex costs for them. If they did not have so many other revenue streams, potentially dicey as there are probably still many untapped performance optimizations.

                        • GaggiX 2 days ago

                          GPT-4o is $2.50/$10, unless you look at an old checkpoint. GPT-4o was the cheapest frontier model before this.

                          • simonw 2 days ago

                            I can’t see that price on https://openai.com/api/pricing/ - it’s listing $5/m input and $15/m output for GPT-4o right now.

                            No wait, correction: that's confusing. It lists 4o first and then lists gpt-4o-2024-08-06 as $2.50/$10.

                            • jeffharris 2 days ago

                              apologies: it's taken us a minute to switch the default `gpt-4o` pointer to the newest snapshot

                              we're planning on making that default change next week (October 2nd). And you can get the lower prices now (and the structured outputs feature) by manually specifying `gpt-4o-2024-08-06`
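
                              a rough sketch of what that looks like with the Python SDK, in case it helps (assuming the current `openai` client and an API key in the environment):

                                  from openai import OpenAI

                                  client = OpenAI()  # reads OPENAI_API_KEY from the environment
                                  response = client.chat.completions.create(
                                      model="gpt-4o-2024-08-06",  # pin the snapshot to get the new pricing now
                                      messages=[{"role": "user", "content": "Say hello"}],
                                  )
                                  print(response.choices[0].message.content)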

                              • jiggawatts 2 days ago

                                > “You can”

                                No, “I” can’t.

                                OpenAI has always trickled out model access, putting their customers into "tiers" of access. I'm not sufficiently blessed by the great Sam to have immediate access.

                                Oh, and Azure OpenAI especially likes to drag its feet, both consistently and on a per-region basis.

                                I live in a “no model for you” region.

                                OpenAI says: "Wait your turn, peasant" while claiming to be about democratising access.

                                Google and everyone else just gives access, no gatekeeping.

                                • benterix 2 days ago

                                  > Google and everyone else just gives access, no gatekeeping.

                                  Well, Gemini Pro was delayed in Europe for many months. Same for Claude.

                                  • jiggawatts a day ago

                                    For legal/regulatory reasons, not for the arbitrary favouritism reasons of OpenAI.

                          • lossolo 2 days ago

                            > For comparison, GPT-4o is currently $5/million input and $15/million output and Claude 3.5 Sonnet is $3/million input and $15/million output.

                            Google is the only one of the three that has its own data centers and custom inference hardware (TPU).

                            • ekkk a day ago

                              It doesn't matter if it's cheap, it's unusable.

                            • naiv 2 days ago

                              This sounds interesting:

                              "We will continue to offer a suite of safety filters that developers may apply to Google’s models. For the models released today, the filters will not be applied by default so that developers can determine the configuration best suited for their use case."

                              • ancorevard 2 days ago

                                This is the most important update.

                                Pricing and speed don't matter when your call fails because of "safety".

                                • CSMastermind 2 days ago

                                  Also Google's safety filters are absolutely awful. Beyond parody levels of bad.

                                  This is a query I did recently that got rejected for "safety" reasons:

                                  Who are the current NFL starting QBs?

                                  Controversial, I know. I'm surprised I was willing to take the risk of submitting such a dangerous query to the model.

                                  • nrvn a day ago

                                    You know what, I just ran this query in Gemini and it spit out "I'm not able to help with that, as I'm only a language model," but just before that I got a glimpse of a real answer:

                                    NFL Starting Quarterbacks for the 2024 Season Note: Quarterback situations can change throughout the season due to injuries, trades, or poor performance. AFC • Baltimore Ravens: Lamar Jackson Buffalo Bills: Josh Allen

                                    But then it gets wiped and you cannot see it even in the drafts. The text above is from the screenshot I managed to make before the response vanished.

                                    This non-deterministic, unpredictable behavior, blended with poor "safety" policies, is one of those major dealbreakers that keeps me from trusting any existing LLM.

                                    • elashri 2 days ago

                                      No stranger than my experience with OpenAI. I got banned from DALL-E 3 access when it first came out because I asked in the prompt for a particle moving in a magnetic field in the forward direction and decaying into two other particles, with a kink angle between the parent and the charged daughter.

                                      I don't recall the exact prompt, but it was something close to that. I really wonder what filters they had about kink tracks, and why. Do they have a problem with beyond-Standard-Model searches? /s

                                      • CSMastermind 2 days ago

                                        For what it's worth I run every query I make through all the major models and Google's censorship is the only one I consistently hit.

                                        I think I bumped into Anthropic's once? And I know I hit ChatGPT's a few months back, but I don't even remember what the issue was.

                                        I hit Google's safety blocks at least a few times a week during the course of my regular work. It's actually crazy to me that they allowed someone to ship these restrictions.

                                        They must either think they will win the market no matter the product quality or just not care about winning it.

                                        • ancorevard 7 hours ago

                                          Microsoft's OpenAI models seem even worse. They run their own "safety" filter.

                                          Using their models in a medical setting is impossible. They refuse to describe scientific photos and will not summarize HCP discussions (texts) that they misinterpret.

                                        • bobwell 2 days ago

                                          I'm reminded of this short story about a government bureaucracy banning research in certain areas to prevent dangerous technology being discovered: https://en.wikipedia.org/wiki/The_Dead_Past

                                    • KaoruAoiShiho 2 days ago

                                      There are still basic filters even after you turn off all the ones the UI lets you turn off. Because of those filters it's still not capable of summarizing some YA novels I tried to feed it.

                                      • panarky 2 days ago

                                        The "safety" filters used to make Gemini models nearly unusable.

                                        For example, this prompt was apparently unsafe: "Summarize the conclusions of reputable econometric models that estimate the portion of import tariffs that are absorbed by the exporting nation or company, and what portion of import tariffs are passed through to the importing company or consumers in the importing nation. Distinguish between industrial commodities like steel and concrete from consumer products like apparel and electronics. Based on the evidence, estimate the portion of tariffs passed through to the importing company or nation for each type of product."

                                        I can confirm that this prompt is no longer being filtered which is a huge win given these new lower token prices!

                                      • FergusArgyll 2 days ago

                                        Any opinions on pro-002 vs pro-exp-0827 ?

                                          Unlike others here I really appreciate the Gemini API: it's free and it works. I haven't done anything too complicated with it, but I made a chatbot for the terminal, a forecasting agent (for the Metaculus challenge) and a yt-dlp auto-namer of songs. The point for me isn't really how it compares to OpenAI/Anthropic; it's a free API key, and I wouldn't have built any of the above if I had to pay just to play around.

                                        • bn-l 2 days ago

                                            I’ve used it. The API is incredibly buggy and flaky. A particular pain point is the “recitation error” fiasco. If you’re developing a real-world app, this basically makes the Gemini API unusable. It strikes me as a kind of “Potemkin” service.

                                            Google is aware of the issue, which has been open on its bug tracker since March 2024: https://issuetracker.google.com/issues/331677495

                                          There is also discussion on GitHub: https://github.com/google-gemini/generative-ai-js/issues/138

                                            It stems from something Google added intentionally to prevent copyrighted material being returned verbatim (a la the NYT/OpenAI fiasco): they dialled up the "recitation" control, which flags the model repeating training data (and maybe data they should not legally have trained on).

                                          Here are some quotes from the bug tracker page:

                                          > I got this error by just asking "Who is Google?"

                                          > We're encountering recitation errors even with basic tutorials on application development. When bootstrapping a Spring Boot app, we're flagged for the pom.xml being too similar to some blog posts.

                                          > This error is a deal breaker... It occurs hundreds of times a day for our users and massively degrades their UX.
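
                                            A crude mitigation sketch (my own rough workaround, not an official fix; assuming the `google-generativeai` Python SDK): check the candidate's finish reason and retry, since generation is stochastic and the filter doesn't always fire on the next attempt:

                                                import google.generativeai as genai

                                                genai.configure(api_key="YOUR_API_KEY")
                                                model = genai.GenerativeModel("gemini-1.5-pro-002")

                                                def generate_with_retry(prompt: str, attempts: int = 3) -> str:
                                                    for _ in range(attempts):
                                                        response = model.generate_content(prompt)
                                                        candidate = response.candidates[0]
                                                        # finish_reason is RECITATION when the output was flagged
                                                        if candidate.finish_reason.name != "RECITATION":
                                                            return response.text
                                                    raise RuntimeError("kept hitting the recitation filter")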

                                          • screye 2 days ago

                                            The recitation error is a big deal.

                                              I was ready to champion Gemini use across my organization, but the recitation issue curbed any enthusiasm I had. It's opaque, and Google has yet to suggest a mitigation.

                                            Your comment is not hyperbole. It's a genuine expression of how angry many customers are.

                                            • 3choff a day ago

                                              It seems the recitation problem has been fixed with the latest models. In my tests, the answer generation no longer stops prematurely. Before I resume a project that was on hold due to this issue, I'm gathering feedback from other users about their experiences with this and the new models. Are you still having this problem?

                                              • undefined 2 days ago
                                                [deleted]
                                              • summerlight 2 days ago

                                                  Looks like they are more focused on the economic side of these large models: roughly 90-95% of other frontier models' performance at 50-70% of the price.

                                                • Workaccount2 2 days ago

                                                    They are going for large corporate customers. They are a brand name with deep pockets and a pretty risk-averse model.

                                                  So even if Gemini sucks, they'll still win over execs being pushed to make a decision.

                                                  • hadlock 2 days ago

                                                      Not even trying to be snarky, but their inability to offer products for more than a handful of years does not help Google get chosen by large corporate customers. I know a guy who works in cloud sales, and his government customers are PISSED that Google is sunsetting one of its PDF products and forcing them to migrate that process. The customer was expecting it to work for 10+ years, and after a ~3 year onboarding process they have 6 months to migrate. If my neck were on the line after buying the Google PDF product, I wouldn't even shortlist them for an AI product.

                                                    • jsnell 2 days ago

                                                      What Google Cloud pdf product is that? I thought my knowledge of discontinued Google products was near-encyclopedic, but this is the first I've heard of that.

                                                      But as an enterprise customer, if you expect X, don't you get X into the contract?

                                                    • RhodesianHunter 2 days ago

                                                      That doesn't seem like much of a plan given their trailing position in the cloud space and the fact that Microsoft and AWS both have their own offerings.

                                                      • resters 2 days ago

                                                          Maybe Google is holding back its far superior, truly sentient AI until other companies have broken the ice. Not long ago there was a Google AI engineer who rage-quit over Google's treatment of sentient AI.

                                                    • TIPSIO 2 days ago

                                                      I do like the trend.

                                                      Imagine if Anthropic or someone eventually releases a Claude 3.5 but at a whopping 10x its current speed.

                                                      That would be far more useful and game-changing than a slow o1 model that may or may not be X percent smarter.

                                                      • bangaladore 2 days ago

                                                        Sonnet 3.5 is fast for its quality, but yeah, it's nowhere near Google's Flash models. I assume that's largely just because it's a much smaller model.

                                                        • sgt101 2 days ago

                                                          We might see that with the inference ASICs later this year I guess?

                                                          • xendipity 2 days ago

                                                            Ooh, what are these ASICs you're talking about? My understanding was that we'll see AMD/Nvidia GPUs continue to be pushed and stay very competitive, alongside new system architectures like Cerebras or Groq. I haven't heard about new compute platforms framed as ASICs.

                                                    • anotherpaulg 2 days ago

                                                      The new Gemini models perform basically the same as the previous versions on aider's code editing benchmark. The differences seem within the margin of error.

                                                      https://aider.chat/docs/leaderboards/

                                                      • kendallchuang 2 days ago

                                                        Has anyone used Gemini Code Assist? I'm curious how it compares with Github Copilot and Cursor.

                                                        • mil22 2 days ago

                                                          I have used GitHub Copilot extensively within VS Code for several months. The autocomplete - fast and often surprisingly accurate - is very useful. My only complaint is that when writing comments, I find the completions distracting to my thought process.

                                                          I tried Gemini Code Assist and it was so bad by comparison that I turned it off within literally minutes. Too slow and inaccurate.

                                                          I also tried Codestral via the Continue extension and found it also to be slower and less useful than Copilot.

                                                          So I still haven't found anything better for completion than Copilot. I find long completions, e.g. writing complete functions, less useful in general, and get the most benefit from short, fast, accurate completions that save me typing, without trying to go too far in terms of predicting what I'm going to write next. Fast is the key - I'm a 185 wpm on Monkeytype, so the completion had better be super low latency otherwise I'll already have typed what I want by the time the suggestion appears. Copilot wins on the speed front by far.

                                                          I've also tried pretty much everything out there for writing algorithms and doing larger code refactorings, and answering questions, and find myself using Continue with Claude Sonnet, or just Sonnet or o1-preview via their native web interfaces, most of the time.

                                                          • kendallchuang 2 days ago

                                                            I see. Perhaps because the Gemini model is larger, it takes longer to generate completions. I would expect a larger model to perform better on larger codebases. It sounds like for you it's faster to work with a smaller model giving shorter, more accurate completions rather than letting the model guess what you're trying to write.

                                                            • imp0cat 2 days ago

                                                              Have you tried Gitlab Duo and if so, what are your thoughts on that?

                                                              • mil22 2 days ago

                                                                Not yet, hadn't heard of it. Thanks for the suggestion.

                                                            • spiralk 2 days ago

                                                              The Aider leaderboards seem like a good practical test of coding usefulness: https://aider.chat/docs/leaderboards/. I haven't tried Cursor personally, but I am finding Aider with Sonnet more useful than GitHub Copilot, and it's nice to be able to pick any model API. Eventually even a local model may be viable. Unfortunately, this new Gemini model does not rank very high.

                                                              • kendallchuang 2 days ago

                                                                Thanks for the link. That's unfortunate, though perhaps the benchmarks will be updated after this latest Gemini release. Cursor with Sonnet is great, I'll have to give Aider a try as well.

                                                                • KaoruAoiShiho 2 days ago

                                                                  It's on the leaderboard: it's tied with Qwen 2.5 72B and far below the SOTA of o1, Claude Sonnet, and DeepSeek (also below very old models like gpt-4-0314, lol).

                                                                  • spiralk 2 days ago

                                                                    It is updated actually, gemini-1.5-pro-002 is this new model.

                                                                    • kendallchuang 2 days ago

                                                                      That was fast, I missed it!

                                                                • therein 2 days ago

                                                                  I know you aren't necessarily talking about in-editor code assist but something about in-editor AI cloud code assist makes me super uncomfortable.

                                                                  It makes sense that I need to be careful not to commit secrets to public repositories, but now I have to avoid not only saving credentials into a file but even pasting them into my editor by accident?

                                                                  • danmaz74 2 days ago

                                                                    I tried it briefly and didn't like it. On the other hand, I found Gemini pro better than sonnet or 4o at some more complex coding tasks (using continue.dev)

                                                                    • dudus 2 days ago

                                                                      I use it and find it very helpful. Never tried cursor or copilot though

                                                                      • therein 2 days ago

                                                                        I tried Cursor the other day. It was actually pretty cool. My thought was, I'll open this open source project and use it to grok my way around the codebase. It was very helpful. After that I accidentally pasted an API secret into the document; had to consider it compromised and re-issued the credential.

                                                                      • spotlmnop 2 days ago

                                                                        God, it sucks

                                                                      • frankdenbow 2 days ago
                                                                        • hadlock 2 days ago

                                                                          They put a 42 minute video on twitter? That's brave.

                                                                        • throwup238 2 days ago

                                                                          > What makes the new model so unique?

                                                                          >> Yeah It's a good question. I think it's maybe less so of what makes it unique and more so the general trajectory of the trend that we're on.

                                                                          Disappointing.

                                                                        • serjester 2 days ago

                                                                          Gemini feels like an abusive relationship — every few months, they announce something exciting, and I’m hopeful that this time will be different, that they’ve finally changed for the better, but every time, I’m left regretting having spent any time with them.

                                                                          Their docs are awful, they have multiple unusable SDKs, and the API is flaky.

                                                                          For example, I started bumping into "recitation" errors - i.e. they issue a flat-out refusal if your response resembles anything in the training data. There's a GitHub issue with hundreds of upvotes, and they still haven't published formal guidance on preventing this. Good luck trying to use the 1M context window.

                                                                          Everything is built the "Google" way. It's genuinely unusable unless you're a total masochist and want to completely lock yourself into the Google ecosystem.

                                                                          The only thing they can compete on is price.

                                                                          • jatins 2 days ago

                                                                            I think it's unusable if you are trying to use it via GCP. Using it via AI Studio is a decent experience.

                                                                            • golergka 2 days ago

                                                                              [flagged]

                                                                              • undefined 2 days ago
                                                                                [deleted]
                                                                            • stan_kirdey 2 days ago

                                                                              Google should just offer Llama 3 405B, maybe slightly fine-tuned. Geminis are unusable.

                                                                              • romland 2 days ago

                                                                                Pretty sure Google's got more than 700 million active users.

                                                                                In fact, Google is most likely _the_ target for that clause in the license.

                                                                                • Deathmax 2 days ago
                                                                                  • tiborsaas 2 days ago

                                                                                    AI companies should not take to naming models after astrological signs; after a while it will be hard to tell model reviews apart from horoscopes.

                                                                                    • VirusNewbie 2 days ago

                                                                                      Llama seems to be tripped up by simple puzzles that Gemini does not struggle with, in my experience.

                                                                                      • nmfisher 2 days ago

                                                                                        I've found Gemini Pro to be the most reliable for function calling.

                                                                                        • dcchambers 2 days ago

                                                                                          > Geminis are unusable

                                                                                          how so?

                                                                                          • JimDabell 2 days ago

                                                                                            In my experience, Gemini models are far worse than any other frontier model when it comes to hallucinations. They are also pretty bad about getting caught in loops where pointing out a mistake makes them flip between two broken solutions. And obviously there's the overzealous safety stuff that other people have mentioned.

                                                                                            • ekkk a day ago

                                                                                              Yep, I don't know why anyone would use Gemini instead of ChatGPT or Claude.

                                                                                              • thekevan a day ago

                                                                                                I have found that as well

                                                                                          • rkwasny 2 days ago

                                                                                            Can someone explain to me why there is COMPLETELY different pricing for models on Vertex AI and Google AI Studio, and OpenRouter has yet another price...?

                                                                                          • charlie0 2 days ago

                                                                                            Anyone want to take bets on how long it takes for this to hit the Google Graveyard?

                                                                                            • mnicky a day ago

                                                                                              I think the sentiment here is not fully objective; there are nice improvements in benchmarks (and even more so when accounting for the price): https://imgur.com/a/K3tVPEw

                                                                                              Also, this model shouldn't be compared to the CoT o1, I think. That is something different (also in price and speed).

                                                                                              • ramshanker 2 days ago

                                                                                                One company buying expensive NVIDIA hardware vs another using in-house chips. Google got a huge advantage here. They could really undercut OpenAI.

                                                                                                • rty32 2 days ago

                                                                                                  People have said that for many years. Very few companies are choosing Google's TPUs. Everyone wants H100s.

                                                                                                • TheAceOfHearts 2 days ago

                                                                                                  I only use regular Gemini and the main feature I care about is absolutely terrible: summarizing YouTube videos. I'll ask for a breakdown or analysis of the video, and it'll give me a very high level overview. If I ask for timestamps or key points, it begins to hallucinate and make stuff up. It's incredibly disappointing that such a huge company with effectively unlimited access to both money and intellectual resources can't seem to implement a video analysis feature that doesn't suck. Part of me wonders if they're legitimately this incompetent or if they're deliberately not implementing good analysis features because it could eat into their views and advertisement opportunities.

                                                                                                  • bahmboo 2 days ago

                                                                                                    I've had it fail when a video does not have subtitles - I'm guessing that's what it uses. I have had good success having it answer the clickbait video titles like "is this the new best thing?"

                                                                                                    It's not watching the video as far as I can tell.

                                                                                                    • foota 2 days ago

                                                                                                      I imagine you'd be paying more in ML costs than YouTube makes off your views.

                                                                                                    • ldjkfkdsjnv 2 days ago

                                                                                                      They have to drop the price because the model is bad. People will pay almost any cost for a model that is much better than the rest. How this company carries on the facade of competence is laughable. All the money on the planet, and they still cannot win on their core "competency".

                                                                                                      • kebsup 2 days ago

                                                                                                        Is there a good benchmark comparing multilingual and/or translation abilities of most recent LLMs? GPT-4o struggles for some tasks in my language learning app.

                                                                                                        • undefined 2 days ago
                                                                                                          [deleted]
                                                                                                          • pacoverdi 2 days ago

                                                                                                            Why the ** do they have to use an 8.9 MiB PNG as the hero image? I guess it was generated by AI?

                                                                                                            • msp26 2 days ago

                                                                                                              Has anyone tried Google's context caching feature? The minimum caching window being 32k tokens seems crazy to me.

                                                                                                              • resource_waste 2 days ago

                                                                                                                It's just not as smart as ChatGPT or Llama; it's mind-boggling that Google fell so far behind.

                                                                                                                • Mistletoe 2 days ago

                                                                                                                  Has it changed much since March when this was written? Gemini won, 6-4. And it matches my experiences of just using Gemini instead of ChatGPT because it gives me more useful responses, when I feel like I want an AI answer.

                                                                                                                  https://www.tomsguide.com/ai/google-gemini-vs-openai-chatgpt

                                                                                                                  • w23j 2 days ago

                                                                                                                    "For this initial test I’ll be comparing the free version of ChatGPT to the free version of Google Gemini, that is GPT-3.5 to Gemini Pro 1.0."

                                                                                                                    The free version of ChatGPT is 4o now, isn't it? So maybe Gemini has not gotten worse, but the free alternatives are now better? When I compare ChatGPT-4o with Gemini Advanced (which is 1.5 Pro, I believe), the latter is just so much worse.

                                                                                                                • phren0logy 2 days ago

                                                                                                                  As far as I can tell there's still no option for keeping data private?

                                                                                                                  • sweca 2 days ago

                                                                                                                    If you are on the pay-as-you-go model, your data is exempt from training.

                                                                                                                    > When you're using Paid Services, Google doesn't use your prompts (including associated system instructions, cached content, and files such as images, videos, or documents) or responses to improve our products, and will process your prompts and responses in accordance with the Data Processing Addendum for Products Where Google is a Data Processor. This data may be stored transiently or cached in any country in which Google or its agents maintain facilities.

                                                                                                                    https://ai.google.dev/gemini-api/terms

                                                                                                                  • diggan 2 days ago

                                                                                                                    Makes sense, as soon as your data leaves your computer, it's safe to assume it's no longer private, no matter what promises a service gives you.

                                                                                                                    You want guaranteed private data that won't be used for anything? Keep it on your own computer.

                                                                                                                    • 999900000999 2 days ago

                                                                                                                      Not sure why you're getting downvoted. Anything sent to a cloud-hosted LLM is liable to be publicly released or used in training.

                                                                                                                      Setting up a local LLM isn't that hard, although I'd probably air gap anything truly sensitive. I like ollama, but it wouldn't surprise me if it's phoning home.

                                                                                                                      • phren0logy 2 days ago

                                                                                                                        This is just incorrect. The OpenAI models hosted through Azure are HIPAA-compliant, and Anthropic will also sign a BAA.

                                                                                                                        • 999900000999 2 days ago

                                                                                                                          I'm open to being wrong. However, for many industries you're still running the risk of leaking data via a third-party service.

                                                                                                                          You can run Llama 3 on-prem, which eliminates that risk. I try to reduce reliance on third-party services when possible. I still have PTSD from Sauce Labs constantly going down and my manager berating me over it.

                                                                                                                          • caseyy 2 days ago

                                                                                                                            You are not technically wrong because a statement "there is a risk of leaking data" is not falsifiable. But your comment is performative cynicism to display your own high standards. For the very vast majority of people and companies, privacy standards-compliant services (like HIPAA-compliant) are private enough.

                                                                                                                            • 999900000999 2 days ago

                                                                                                                              I know my company outright warns us to not share any sensitive information with LLMs, including ones that claim to not use customer data for training.

                                                                                                                              I can flip your statement around: for the vast majority of use cases, Llama 3 can be hosted on-prem and will have similar performance.

                                                                                                                        • spiralk 2 days ago

                                                                                                                          This is not true. Both OpenAI's and Google's LLM APIs have a policy of not using the data sent over them. It's no different from trusting Microsoft's or Google's cloud to store private data.

                                                                                                                          • phren0logy 2 days ago

                                                                                                                            Can you link to documentation for Google's LLMs? I searched long and hard when Gemma 2 came out, and all of the LLM offerings seemed specifically exempted. I'd love to know if that has changed.

                                                                                                                            • spiralk 2 days ago
                                                                                                                              • phren0logy 2 days ago

                                                                                                                                Thanks very much! I think before I looked at docs for Google AI Studio, but also for Google Workspace, and both made no guarantees.

                                                                                                                                From the linked document, so save someone else a click:

                                                                                                                                     > The terms in this "Paid Services" section apply solely to your use of paid Services ("Paid Services"), as opposed to any Services that are offered free of charge like direct interactions with Google AI Studio or unpaid quota in Gemini API ("Unpaid Services").
                                                                                                                  • sweca 2 days ago

                                                                                                                    No HumanEval benchmark result?

                                                                                                                    • accumulator 2 days ago

                                                                                                                      Cool, now all Google has to do is make it easier to onboard new GCP customers and more people will probably use it... it's comical how hard it is to create a new GCP organization and billing account. Also, I think more Workspace customers would probably try Gemini if it were a usage-based trial as opposed to clicking a "Try for 14 days" CTA to activate a new subscription.

                                                                                                                      • jzebedee 2 days ago

                                                                                                                        As someone who actually had to build on Gemini, it was so indefensibly broken that I couldn't believe Google really went to production with it. Model performance changes from day to day and production is completely unstable as Google will randomly decide to tweak things like safety filtering with no notice. It's also just plain buggy, as the agent scaffolding on top of Gemini will randomly fail or break their own internal parsing, generating garbage output for API consumers.

                                                                                                                        Trying to build an actual product on top of it was an exercise in futility. Docs are flatly wrong, supposed features are vaporware (discovery engine querying, anybody?), and support is nonexistent. The only thing Google came back with was throwing more vendors at us and promising that bug fixes were "coming soon".

                                                                                                                        With all the funded engagements and credits they've handed out, it's at the point where Google is paying us to use Gemini and it's _still_ not worth the money.

                                                                                                                        • wewtyflakes 2 days ago

                                                                                                                          > Docs are flatly wrong

                                                                                                                          This +999; I couldn't believe how inconsistent and wrong the docs were. Not only that, but once I got something successfully integrated, it worked for a few weeks then the API was changed, so I was back to square one. I gave it a half-hearted try to fix it but ultimately said 'never again'! Their offering would have to be overwhelmingly better than Anthropic and OpenAI for me to consider using Gemini again.

                                                                                                                          • ldjkfkdsjnv 2 days ago

                                                                                                                            The engineers at Google are bad; they keep hiring via pure LeetCode and can't ship working products.

                                                                                                                            • victor106 2 days ago

                                                                                                                              Same experience here.

                                                                                                                              I had hopes of Google being able to compete with Claude and OpenAI, but I don't think that's the case. Unless they come out with a product that's 10x better in the next year or so, I think they've lost the AI race.

                                                                                                                              • undefined 2 days ago
                                                                                                                                [deleted]
                                                                                                                              • mixtureoftakes 2 days ago

                                                                                                                                TLDR: 2x cheaper, slightly smarter, and they only compare these new models to their own old ones. Does Google have a moat?

                                                                                                                                • usaar333 2 days ago

                                                                                                                                  The math score exceeds o1-preview (though not mini or o1 full) fwiw.

                                                                                                                                  • cj 2 days ago

                                                                                                                                    Moat could be things like direct integration into Gmail (ask it to find your last 5 receipts from Amazon), Drive (chat with PDF), Slides (create images / flow charts), etc.

                                                                                                                                    Not sure if their models are the moat. But they definitely have an opportunity from the productization perspective.

                                                                                                                                    But so does Microsoft.

                                                                                                                                    • svara 2 days ago

                                                                                                                                      Have you tried the Gemini Gmail integration? I have that enabled in my GSuite account.

                                                                                                                                      It's incredible how bad it is. I've seen it claim I've never received mail from a certain person, while the email was open right next to the chat widget. I've seen it tell me to use the standard search tool, when that wasn't suitable for the query. I've literally never had it find anything that wouldn't have been easier to find with the regular search.

                                                                                                                                      I mean, it's a really obvious thing for them to do, I'm genuinely confused why they released it like that.

                                                                                                                                      • cj 2 days ago

                                                                                                                                        > I'm genuinely confused why they released it like that.

                                                                                                                                        I agree. Right now it's not very useful, but has the potential to be if they keep investing in it. Maybe.

                                                                                                                                        I think Google, Microsoft, etc are all pressured to release something for fear of appearing to be behind the curve.

                                                                                                                                        Apple is clearly taking the opposite approach re: speed to market.

                                                                                                                                        • svara 2 days ago

                                                                                                                                          Yeah - the thing is, though, you could build the same thing better in a day's work using OpenAI's API, or Gemini's for that matter. Something on the order of the sketch below.
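
                                                                                                                                          For a sense of scale, a rough sketch of that kind of assistant using the public google-generativeai SDK - assuming the message bodies have already been pulled down via the Gmail API, and with purely illustrative names:

                                                                                                                                              import google.generativeai as genai

                                                                                                                                              genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
                                                                                                                                              model = genai.GenerativeModel("gemini-1.5-flash")

                                                                                                                                              def ask_about_mail(question, emails):
                                                                                                                                                  # Naive "dump and ask": put the fetched messages into the long context
                                                                                                                                                  # window and have the model answer the question directly.
                                                                                                                                                  context = "\n\n---\n\n".join(emails)
                                                                                                                                                  prompt = f"Here are my recent emails:\n\n{context}\n\nQuestion: {question}"
                                                                                                                                                  return model.generate_content(prompt).text

                                                                                                                                              # ask_about_mail("Find my last 5 receipts from Amazon", emails)

                                                                                                                                          Nowhere near a product, obviously, but it shows how low the floor is for something that works as well as a demo.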

                                                                                                                                          I wonder if there isn't a deeper, more worrying (for Google) reason behind that - that AI is killing their margin.

                                                                                                                                          Google has always been about delivering top notch services, and winning by being able to do that cheaper than the competition.

                                                                                                                                          It's "in their DNA" - everyone knows that using links to a website as a quality signal was a really good idea in the early days of Google, but what's a little less well known is that the true stroke of genius was the algorithmic efficiency of PageRank.

                                                                                                                                          Similarly for Gmail. Remember when it launched? 1 GB of free storage was just completely out of every competitor's league.

                                                                                                                                          It may just be that this recipe of out-engineering everyone on algorithms and datacenter operations no longer works in the age of modern machine learning.

                                                                                                                                          • knowriju a day ago

                                                                                                                                            The problem with the current crop of LLMs is that they make for great demos. I am also confident that you can build a working prototype for Gmail, Outlook, or any other surface. But I am equally confident it will be a massively different ballgame to roll it out to a billion users. You'll run into a lot of edge cases and have to handle a lot of adversarial scenarios as well. Pretty sure that's the same issue Apple is running into, and why they have had to postpone rollouts.

                                                                                                                                            • svara a day ago

                                                                                                                                              I don't buy that at all. They've literally shipped a broken, useless product that this amateur could do better (yes, as a demo).

                                                                                                                                              All the hard scalability stuff, they've already done before. Gmail exists, the Gemini API exists.

                                                                                                                                              If they're not getting it to work, there must be another reason. They just can't afford to provide it at a price point that users accept.

                                                                                                                                      • ethbr1 2 days ago

                                                                                                                                        Doesn't Microsoft also get OpenAI's IP if OpenAI runs out of money?

                                                                                                                                      • re-thc 2 days ago

                                                                                                                                        > Does Google have a moat?

                                                                                                                                        Potentially (depends on whether the EU cares)...

                                                                                                                                        E.g. integration with Google Search (instead of ChatGPT's Bing search), providing map data, Android integration, etc.

                                                                                                                                      • thekevan 2 days ago

                                                                                                                                        Google does not miss one single opportunity to miss an opportunity.

                                                                                                                                        They announced a price reduction but it "won't be available for a few days". By then, the initial hype will be over and the consumer-use side of the opportunity to win new users will be lost in other news.
