• 6thbit an hour ago

    This looks neat; we certainly need more ideas and solutions in this space. I work with large codebases daily, and the limits on agentic contexts are constantly evident. I have some questions about how I would consume a tool like this:

    How does this fare with codebases that change very frequently? I presume background agents re-indexing changes must become a bottleneck at some point for large or very active teams.

    If I'm working on a large set of changes, modifying lots of files, moving definitions around, etc., meaning I've deviated locally quite a bit from the most up-to-date index, will Nia be able to reconcile what I'm trying to do locally with the index, despite my local changes looking quite different from upstream?

    • jellyotsiro 38 minutes ago

      great question!

      For large and active codebases, we avoid full reindexing. Nia tracks diffs and file-level changes, so background workers only reindex what actually changed. We are also building “inline agents” that watch pull requests or recent commits and proactively update the index ahead of your agent queries.
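
      Roughly, the incremental pass looks something like the sketch below (hypothetical helper names, not our exact pipeline): only the paths that changed between the last indexed commit and HEAD get re-chunked and re-embedded.

          import subprocess

          def changed_files(repo, last_indexed_sha, head_sha):
              # git diff --name-only lists paths that differ between the two commits
              out = subprocess.run(
                  ["git", "-C", repo, "diff", "--name-only", last_indexed_sha, head_sha],
                  capture_output=True, text=True, check=True)
              return [p for p in out.stdout.splitlines() if p]

          def incremental_reindex(repo, last_indexed_sha, head_sha, index):
              # `index` stands in for whatever store backs the search;
              # a real pipeline would also handle deleted and renamed paths.
              for path in changed_files(repo, last_indexed_sha, head_sha):
                  index.delete(path)  # drop stale chunks for this file
                  index.add(path)     # re-chunk and re-embed just this file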

      Local vs. upstream divergence is a real scenario. Today Nia prioritizes providing external context to your coding agents: packages, provider docs, SDK versions, internal wikis, etc. We can still reconcile with your local code if you point the agent at your local workspace (Cursor and Claude Code already provide that path). We look at file paths, symbol names, and usage references to map local edits to known context. In cases where the delta is large, we surface both the local version and the latest indexed version so the agent understands what changed.
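
      To make the symbol-matching idea concrete, here is a simplified, purely illustrative sketch (not our actual implementation): parse the locally edited file, collect its top-level definitions, and look those names up in the upstream index to see where each one was last seen.

          import ast

          def local_symbols(path):
              # Collect function and class names defined in the locally edited file
              tree = ast.parse(open(path, encoding="utf-8").read())
              return {node.name for node in ast.walk(tree)
                      if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))}

          def reconcile(path, indexed_symbols):
              # indexed_symbols: symbol name -> file path in the upstream index.
              # Returns where the index last saw each local symbol (None = new symbol).
              return {name: indexed_symbols.get(name) for name in local_symbols(path)}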

    • mritchie712 32 minutes ago

      Cursor promises to do this[0] in the product, so, especially on HN, it'd be best to start with "why this is better than Cursor".

      > favorite doc sites so I do not have to paste URLs into Cursor

      This is especially confusing, because Cursor has a feature for docs you want to scrape regularly.

      0 - https://cursor.com/docs/context/codebase-indexing

      • jellyotsiro 30 minutes ago

        The goal here is not to replace Cursor’s own local codebase indexing. Cursor already does that part well. What Nia focuses on is external context. It lets agents pull in accurate information from remote sources like docs, packages, APIs, and broader knowledge bases.

      • zwaps an hour ago

        Absolutely insane that we celebrated coding agents getting rid of RAG, only for the next innovation to be RAG.

        • jellyotsiro an hour ago

          Not exactly just RAG. The shift is agentic discovery paired with semantic search.

          Also, most coding agents still combine RAG and agentic search. See Cursor’s blog post about how semantic search helps them understand and navigate massive codebases: https://cursor.com/blog/semsearch
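
          As a toy illustration of what “RAG plus agentic search” means in practice (a sketch, not Nia’s API): the agent gets both a fuzzy semantic-search tool and an exact grep tool, and chooses per step which one to call.

              import re

              def grep(pattern, files):
                  # Exact lookup: useful when the agent already knows a symbol or phrase.
                  return [f"{path}:{line}" for path, body in files.items()
                          for line in body.splitlines() if re.search(pattern, line)]

              def semantic_search(query, chunks, embed, k=5):
                  # Fuzzy lookup: rank pre-embedded (text, vector) chunks by cosine similarity.
                  def cos(a, b):
                      dot = sum(x * y for x, y in zip(a, b))
                      na = sum(x * x for x in a) ** 0.5
                      nb = sum(y * y for y in b) ** 0.5
                      return dot / (na * nb) if na and nb else 0.0
                  q = embed(query)
                  return [text for text, vec in sorted(chunks, key=lambda c: -cos(q, c[1]))[:k]]

              # An agent loop dispatches TOOLS[name](**args) based on the model's tool choice.
              TOOLS = {"grep": grep, "semantic_search": semantic_search}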

          • govping an hour ago

            The context problem with coding agents is real. We've been coordinating multiple agents on builds - they often re-scan the same files or miss cross-file dependencies. Interested in how Nia handles this - knowledge graph or smarter caching?

            • jellyotsiro 43 minutes ago

              hey! knowledge graphs are also used at runtime but paired with other techniques, since graphs are only useful for relationship queries.
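
              To show what a “relationship query” means here (toy example, not how Nia stores its graph): finding every transitive caller of a symbol is a graph walk, which similarity search alone can't answer reliably.

                  from collections import deque

                  # Hypothetical call graph: caller -> callees
                  CALLS = {"cli": ["handler"],
                           "handler": ["validate", "save"],
                           "save": ["serialize"]}

                  def callers_of(symbol, calls):
                      # Walk reversed edges to find every transitive caller of `symbol`
                      rev = {}
                      for caller, callees in calls.items():
                          for callee in callees:
                              rev.setdefault(callee, []).append(caller)
                      seen, todo = set(), deque(rev.get(symbol, []))
                      while todo:
                          n = todo.popleft()
                          if n not in seen:
                              seen.add(n)
                              todo.extend(rev.get(n, []))
                      return seen

                  print(callers_of("serialize", CALLS))  # {'save', 'handler', 'cli'}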

            • choilive an hour ago

              The pendulum swings back.

              • ModernMech 23 minutes ago

                This is happening over and over and over. The example of prompt engineering is just a form of protocol. Context engineering is just about cache management. People think LLMs will replace programming languages and runtimes entirely, but so far it seems they have been used mostly to write programs in programming languages, and I've found they're very bad interpreters and compilers. So far, I can't really pick out what exactly LLMs are replacing except the need to press the individual keys on the keyboard, so I still struggle to see them as more than super fancy autocomplete. When the hype is peeled away, we're still left with all the same engineering problems but now we have added "Sometimes the tool hallucinates and gaslights you".

              • RomanPushkin an hour ago

                Having this RAG layer was always another thing I wanted to try. I haven't coded it myself, and I'm super interested in whether it gives a real boost while working with Claude. Curious to hear from anyone who has already tried the service: what's your feedback? Did you feel you were getting real improvements?

                • jellyotsiro an hour ago

                  Wouldn’t call it just RAG though. Agentic discovery and semantic search are the way to go right now, so Nia combines both approaches. For example, you can dynamically search through a documentation tree or grep for specific things.

                  • zwaps an hour ago

                    We call it agentic RAG. The retriever is an agent. It’s still RAG.

                    • jellyotsiro an hour ago

                      Which is much better than the techniques used in 2023. As context windows increase, combining them becomes even easier.

                      There are a lot of ways to interpret agentic RAG, pure RAG, etc.

                • ModernMech 28 minutes ago

                  Hard to follow gif of the thing working without explanation: check

                  Carousel of a bunch of random companies "using" the product without an indication how or in what capacity: check

                  List of a bunch of investors, as if that's meaningful to anyone who will use this product rather than people who would invest in it: check

                  The audacity to ask for actual money for a product that barely exists and is mostly a wrapper around technology from the very companies investing in you: check

                  Claims of great internal success without any proof: check

                  Testimonials from random Twitter accounts who may or may not be bots or paid, who knows: check

                  To try it or even get a sense of how it works or what it is, you have to sign up: check

                  Congrats! Looks like you're set up for a trillion dollar valuation in the AI space!

                  To be less flippant and more constructive: if you're going to say the thing reduces hallucinations and provides 10x speedups in development, you need to provide proof immediately or stop making the claim; otherwise there's zero credibility for this product.

                  • johnsillings 33 minutes ago

                    super smart. congrats on the launch!

                    • himike 33 minutes ago

                      I love Nia, keep it up Arlan

                      • jellyotsiro 30 minutes ago

                        thank you haha!

                      • ramzirafih 29 minutes ago

                        Love it.