The Parallel Search API (parallel.ai)
Submitted by lukaslevert 6 hours ago
  • ddp26 an hour ago

    Hi Parag, congrats on the launch. We'll try this out at FutureSearch.

    I agree there is a need for such APIs. Using Google or Bing isn't enough, and Exa and Brave haven't clearly solved this yet.

    • srameshc 3 hours ago

      I like Parallel and have been using it for tests, but I am not sure about the terms.

      > The materials displayed or performed or available on or through our website, including, but not limited to, text, graphics, data, articles, photos, images, illustrations and so forth (all of the foregoing, the “Content”) are protected by copyright and/or other intellectual property laws. You promise to abide by all copyright notices, trademark rules, information, and restrictions contained in any Content you access through our website, and you won’t use, copy, reproduce, modify, translate, publish, broadcast, transmit, distribute, perform, upload, display, license, sell, commercialize or otherwise exploit for any purpose any Content not owned by you, (i) without the prior consent of the owner of that Content or (ii) in a way that violates someone else’s (including Parallel's) rights.

      • pegasus 3 hours ago

        IANAL, but I think this is to remind you that the fragments of text it returns to you after pulling them from various sites in response to your query are protected by whatever copyright notices might be found on those websites. Seems reasonable to me.

      • tcdent 4 hours ago

        Search accuracy is so important in the context of an agent because when the agent is handed incorrect search results, it tends to interpret them as fact, since they come from a "credible" source. So this is very much an industry that still has plenty of room for improvement, and I'm excited to see how this product performs.

        • BinaryIgor 5 hours ago

          Interesting, but I'm not totally convinced that searching for LLMs is different than searching for us (humans). In the end, we both want to get information that's relevant to our query (intent). Besides, I wonder whether they will be able to convince big players like OpenAI to use them instead of Google Search with its proven record :)

        • riskable 4 hours ago

          I've been saying for quite some time now that AI is going to kill the traditional (free) search engine. This is just another nail in the coffin.

          When an AI searches google.com for you, the ads never get shown to the user. Search engines like kagi.com are the future. You'll give the AI your Kagi API key and that'll be it. You won't even need cloud-based AI for that kind of thing! Tiny, local models trained for performing searches on behalf of the user will do it instead.
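
          Roughly, that hand-off could look like the sketch below. This is only an illustration: the Kagi endpoint and auth header should be double-checked against Kagi's API docs, and local_model() is a stand-in for whatever small on-device model you run.

            # Sketch: a local model answering with help from a paid, ad-free search API.
            # The endpoint/auth details and local_model() are assumptions, not verified specifics.
            import os
            import requests

            def kagi_search(query: str) -> list[dict]:
                # Kagi-style search call; consult the provider's docs for the real
                # endpoint and authentication scheme.
                resp = requests.get(
                    "https://kagi.com/api/v0/search",
                    params={"q": query},
                    headers={"Authorization": f"Bot {os.environ['KAGI_API_KEY']}"},
                    timeout=10,
                )
                resp.raise_for_status()
                return resp.json().get("data", [])

            def local_model(prompt: str) -> str:
                # Placeholder: swap in your local model of choice (llama.cpp, Ollama, etc.).
                return "summary of: " + prompt[:200]

            def answer(question: str) -> str:
                snippets = [r.get("snippet", "") for r in kagi_search(question)[:5]]
                return local_model(f"Answer using only these snippets:\n{snippets}\n\nQ: {question}")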

          Soon your OS will regularly pull down AI model updates just like it pulls down software updates today. Everyday users will have dozens of models specialized for all sorts of tasks, like searching the Internet. They won't even know what they're for or what they do, just like your average Linux user doesn't know what the `polkit` or `avahi-daemon` services do.

          My hope: This will (eventually) put pressure on hardware manufacturers to include more VRAM in regular PCs/consumer GPUs.

          • lukaslevert 2 hours ago

            There are very broad consequences for a world that no longer accesses the web primarily through Google Search. We're building for that too!

            • stephantul 4 hours ago

              I fully agree, except that I think this will still be a very “power user” thing. Perhaps this is also what you mean because you reference Linux. But traditional search will be very important for a very long while, imo

              • gethly 4 hours ago

                > AI is going to kill the traditional (free) search engine

                Yes, this has been an issue for many content creators. I predict that because of this, a lot of the internet will end up behind a paywall. I run one, so I hope the future is bright, but overall this is very bad for the internet, because it was never intended to be used this way. Sure, it will be great for users to save an unimaginable amount of time searching manually, but if websites lose traffic, well... that is the end of the internet as we know it.

                • riskable 3 hours ago

                  Inflation might help the situation by making microtransactions a more realistic prospect. However, what would really help would be ending Visa, MasterCard, and American Express's monopoly on payments, where they extract at least $0.30 out of every transaction.
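
                  To make the microtransaction problem concrete, here is the fee math with purely illustrative numbers (a common card fee shape is a fixed fee plus a percentage; the exact figures vary by processor and are assumptions here):

                    # Illustrative only: flat-fee-plus-percentage card pricing vs. tiny payments.
                    FIXED_FEE = 0.30     # dollars per transaction (assumed)
                    PERCENT_FEE = 0.029  # 2.9% (assumed)

                    for price in (0.10, 0.50, 2.00, 20.00):
                        fee = FIXED_FEE + price * PERCENT_FEE
                        print(f"${price:>5.2f} sale -> ${fee:.2f} fee ({fee / price:.0%} of the sale)")

                    # A $0.10 article payment loses ~303% to fees; a $20 purchase loses ~4.4%.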

                  I used to work for the credit card industry like 15 years ago (damn, I feel old now). Back then, you know how much a credit card transaction actually cost (them)? $0.00001 (or something like that). That accounts for all the people they had working for them, the infrastructure, the servers, etc. It'd be even less today.

                  There's no reason for them to exist. The government should just set up a central bank transfer system with unlimited free transactions already. Or even better: mandate that banks can't charge fees for transactions, not to consumers or businesses. They already make more than enough money to make up for it (source: I work for a bank, and transaction fees are pure profit since there's basically zero cost associated with them).

                  • gethly 2 hours ago

                    I absolutely agree. I designed the platform to use wallets, so I never involve a third party in my business or the business of the content creators and risk being financially deplatformed (famously often done by Stripe and PayPal). I wanted to give users a chance to use payment cards to deposit money into their wallets, since people are used to paying online with cards, but because the platform provides no service in return, this was incompatible with the policies of payment processors and card providers. So users have to make a bank transfer. Thankfully, European SEPA payments are widespread nowadays and can be instant. People have banking apps on their phones, so it is even faster than using a card. But paying online with cards is too deeply ingrained for modern users to find this comfortable yet. Anyhow, I think we are slowly moving away from cards, and in time they will hopefully become a thing of the past; the internet has been around for ages, and cards fulfil no useful function that cannot be supplanted by a decentralised solution from modern banks.

                    • tyre 2 hours ago

                      I agree that they are a cartel tax on the economy, but their costs are higher than that. They are also taking on credit risk: if your card gets stolen, the thief buys a $3k surfboard, and then you get refunded, they are out the $3k.

                      They are also paying for the rewards on top of the points given out.

                      Again, not saying they're not making a ton of profit. Their cost is higher than you've said, though.

                      • gethly 2 hours ago

                        You are mixing up debit and credit cards here. Debit cards have essentially no protection; only credit cards do, as they are literally loans and lenders invest in protecting their debtors to keep them popular.

                      • Xss3 2 hours ago

                        Chances are it could be similarly expensive now, because COBOL devs are more expensive? Is very old but still-scalable infrastructure really that much cheaper to run these days?

                      • paragagrawal 2 hours ago

                        I worry about this too. Some thoughts on how we plan to tackle this challenge are here: https://parallel.ai/about

                        • NitpickLawyer 3 hours ago

                          > that is the end of the internet as we know it.

                          Eh. Some of us remember an internet before free-with-advertising became the norm. In the 90s and early 2000s people were putting stuff online for free with no desire to monetise that content, and it was way more expensive to do so back then. Today you can host a personal blog for less than a coffee. I for one wouldn't mind going back to people sharing stuff for the fun of it, instead of the myriad of content that's only there to promote, sell, or advertise this and that.

                          • gethly 2 hours ago

                            I remember. But you forgot one fact: the number of users online was minuscule compared to what we have today. That is the bane of everything: saturation. You keep diluting a good thing until only a faint memory of it remains.

                      • bfeynman 4 hours ago

                        The need for more web search indices is indeed dire: given a landscape where agents and providers are turning into walled gardens, independent ones are definitely going to be needed. But it seems insurmountable when building an actual index is so costly. Maybe a purely Pareto-efficient approach of serving 80% of requests or so is good enough.

                        • paragagrawal 2 hours ago

                          not insurmountable

                        • nahnahno 5 hours ago

                          The fact that GPT-4.1 was the judge does not convince me of the validity of the benchmark.

                          • ripped_britches 4 hours ago

                            It's probably just that they started before GPT-5 was released. It's a good judge.

                            • tacoooooooo 4 hours ago

                              It's an odd choice; I'd be curious why they picked it. It's not the cheapest, the most expensive, the best, or the worst.

                              It does have a relatively large context window, though, and ime it is very good at format adherence.
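
                              For what it's worth, LLM-as-judge setups usually lean on exactly that format adherence: the model is asked to return a strict verdict schema. Below is a minimal sketch of such a call with the OpenAI Python SDK; the rubric and JSON fields are made up for illustration, not the benchmark's actual prompt.

                                # Hypothetical LLM-as-judge call; rubric and fields are illustrative.
                                import json
                                from openai import OpenAI

                                client = OpenAI()

                                def judge(question: str, answer_a: str, answer_b: str) -> dict:
                                    resp = client.chat.completions.create(
                                        model="gpt-4.1",
                                        response_format={"type": "json_object"},
                                        messages=[
                                            {"role": "system",
                                             "content": 'You grade search results. Reply as JSON: '
                                                        '{"winner": "A"|"B"|"tie", "reason": "..."}'},
                                            {"role": "user",
                                             "content": f"Question: {question}\n\nA: {answer_a}\n\nB: {answer_b}"},
                                        ],
                                    )
                                    return json.loads(resp.choices[0].message.content)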

                          • aabhay 5 hours ago

                            The latency of 5s for the basic-tier search request is very confusing to me. Is that 5s per request or 5s per 1k requests? If it is indeed 5s per request, that seems like a deal breaker.

                            • pegasus 2 hours ago

                              This is a search agent available in the cloud. The site mentions that they don't optimize for being "done in milliseconds and as cheaply as possible", and that they do a lot more work, like extracting relevant paragraphs, "Single-call resolution for complex queries that normally require multiple search hops", and more. It's geared to be consumed by other agents, hence the latency may be tolerable. They also have the advantage of running the agent code close to the index, so searches are less expensive. Basically, this is something in between a simple Google search and a "deep research" or at least "thinking" LLM call.

                              • paragagrawal 2 hours ago

                                In agentic use cases, we save on end-to-end latency by spending more time and compute on individual searches: agents make fewer searches, consume fewer total tokens, and end up spending fewer thinking tokens when using the Parallel Search API.
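
                                A back-of-the-envelope version of that trade-off, with made-up numbers purely for illustration (the per-call latencies and call counts below are assumptions, not Parallel's published figures):

                                  # Illustrative only: slower-but-richer searches can win end to end
                                  # when the agent needs fewer of them and fewer reasoning round trips.
                                  quick = {"search_s": 1.0, "searches": 8, "llm_round_trips": 8, "llm_s": 3.0}
                                  rich  = {"search_s": 5.0, "searches": 2, "llm_round_trips": 2, "llm_s": 3.0}

                                  def total_seconds(cfg):
                                      return cfg["searches"] * cfg["search_s"] + cfg["llm_round_trips"] * cfg["llm_s"]

                                  print(total_seconds(quick))  # 8*1 + 8*3 = 32 s end to end
                                  print(total_seconds(rich))   # 2*5 + 2*3 = 16 s, despite slower searches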

                            • hartator 5 hours ago

                              Congrats on the launch!

                              • gm678 4 hours ago

                                Same pricing as Google search APIs, for what it's worth

                                • apsurd 4 hours ago

                                  Human | AI toggle is cool.

                                  Obligatory: an information-dense format is valuable for humans too! But the entire Internet is propped up by ads, so it seems we can't have nice things.