• cjonas 20 minutes ago

    All these libraries are trying to do too much. The "batteries included" approach makes for great demos, but falls apart for any real application.

    • fitzgera1d an hour ago

      There’s an MCP Apps version of this that is interesting: https://creature.run

      Maybe I’m misunderstanding, but isn’t generating UI just-in-time kind of risky, because the AI can get it wrong? Whereas you can generate/build an MCP App once that is deterministic, always returns a working result, and is just as AI-native.

      • milst 29 minutes ago

        With this you build your own React components and register them with the AI. The AI chooses which to use and what props to pass into them, so it's not generating UI from scratch, if that's what you mean. (I'm the other Michael from the Tambo team.)
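
        Roughly, the registration looks like this (a sketch; the exact field and package names may differ across SDK versions):

        ```tsx
        // Sketch only: the registration fields (name, description, component,
        // propsSchema) are assumptions based on a reading of the Tambo docs.
        import { z } from "zod";

        // An ordinary React component you already own.
        function WeatherCard({ city, tempC }: { city: string; tempC: number }) {
          return (
            <div>
              {city}: {tempC}°C
            </div>
          );
        }

        // Registering it tells the agent what the component is for and which
        // props it may pass; the agent never writes JSX itself.
        export const components = [
          {
            name: "WeatherCard",
            description: "Shows the current weather for a city",
            component: WeatherCard,
            propsSchema: z.object({ city: z.string(), tempC: z.number() }),
          },
        ];
        ```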

      • dzogchen an hour ago

        I don’t understand what this does. Who would use this and why? I need an ELI5.

        Edit: Announcement was more clear https://tambo.co/blog/posts/introducing-tambo-generative-ui

        Can it also generate new components?

        • grouchy 25 minutes ago

          You install the React SDK, register your React components with Zod schemas, and then the agent responds to users with your UI components.

          Developers are using it to build agents that actually solve user needs with their own UI elements, instead of text instructions or taking actions with minimal visibility for the user.
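
          For concreteness, the wiring might look roughly like this (the package name and provider props are a sketch and may not match the current SDK):

          ```tsx
          // npm install @tambo-ai/react zod
          // Sketch: the package name and provider props below are assumptions.
          import type { ReactNode } from "react";
          import { TamboProvider } from "@tambo-ai/react";
          import { components } from "./registry"; // e.g. a WeatherCard entry

          export function App({ children }: { children: ReactNode }) {
            return (
              // The provider exposes the registered components to the agent,
              // which answers user messages by rendering them with chosen props.
              <TamboProvider apiKey={process.env.TAMBO_API_KEY} components={components}>
                {children}
              </TamboProvider>
            );
          }
          ```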

          We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).

          We do have a skill you can give your agent to create new UI components:

          ```
          npx skills add tambo-ai/tambo
          ```

          Then run `/components`.

        • avaer 2 hours ago

          Since I didn't see it in the Readme, how does this compare to something like Google's A2UI? Seems like it's doing more, but could e.g. Tambo work on top of A2UI protocol or is it a different beast?

          My agents need a UI and I'm in the market for a good framework to land on, but, as is always the case with these kinds of interfaces, I strongly suspect there will be a standard inter-compatible protocol underlying it that can connect many kinds of agents to many kinds of frontends. What is your take on that?

          • lachieh 17 minutes ago

            Hey! I'm on the Tambo team so I'll chip in. There isn't really any reason we couldn't support A2UI; it's a great way to let models describe generative UIs. We could add an A2UI renderer.

            The way we elevator-pitch Tambo is "an agent that understands your UI" (which, admittedly, says little about the implementation details). We've spent our time letting components (whether pre-existing or purpose-built) be registered as tools that can be controlled and rendered either in-chat or out within your larger application. The chat box shouldn't be the boundary.

            Personally, my take on standards like A2UI is that they could prove useful but the models have to easily understand them or else you have to take up additional context explaining the protocol. Models already understand tool-calling so we're making use of that for now.
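
            To illustrate that last point, a registered component surfaces to the model as an ordinary tool call, so there is no new protocol to explain (the names and payload shape below are hypothetical):

            ```ts
            // Hypothetical wire-level view of "components as tools": the model
            // emits a plain tool call, so no extra protocol needs explaining
            // in the prompt. Names and shapes here are illustrative only.
            const toolCall = {
              name: "render_WeatherCard",
              arguments: { city: "Berlin", tempC: 7 },
            };

            // The client validates the arguments against the component's schema,
            // then renders <WeatherCard city="Berlin" tempC={7} /> in-chat or
            // elsewhere in the app.
            ```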

          • krashidov an hour ago

            congrats on the launch! we're building type.com and we would love to use this - shoot me an email: k at type dot com

            our use case is to allow other users to build lightweight internal apps within your chat workspace (say like an applicant tracking system per hire etc.)

            • grouchy 23 minutes ago

              Thank you. I just sent you an email. Looking forward to learning more about what you are building.

            • deep_origins 3 hours ago

              Big fan of Tambo and what the team has built. I started using it on a couple of side projects, and being able to use the Zod schemas as the source of truth for LLM structured outputs is handy.

              • grouchy 20 minutes ago

                Awesome to meet another tambonaut.

                We love Zod, and we also support Standard Schema, so most other popular schema libraries work too.
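
                For example, a Standard Schema library like Valibot should drop in where the Zod schema went (a sketch; the registration shape is assumed from earlier in the thread):

                ```ts
                // Valibot implements the Standard Schema spec, so it can stand
                // in for Zod here. The surrounding registration shape is assumed.
                import * as v from "valibot";

                const propsSchema = v.object({
                  city: v.string(),
                  tempC: v.number(),
                });

                // Register as before: { name, description, component, propsSchema }
                ```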

                I'm curious how you found us?

              • danialtz 4 hours ago

                impressive!

                is this in the same category as CopilotKit? CPK is an AG-UI proxy for similar use cases, but here there seems to be more emphasis on linked components?

                • grouchy 16 minutes ago

                  There's overlap for sure. I'd say we've built a more drop-in solution. We actually migrated to AG-UI events under the hood, and we have plans to expand cross-compatibility across standards.

                  The major difference is that we provide an agent: you don't need to bring your own agent or framework. A lot of our developers are using our agent and are really happy with it, and we have a bunch of upcoming features to make it even better out of the box.

                • jauntywundrkind 4 hours ago

                  Is there any interest or discussion about finding a way to use these tools to work with MCP Apps?

                  Release: http://blog.modelcontextprotocol.io/posts/2026-01-26-mcp-app... . Announcement: http://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-app... . Submission: https://news.ycombinator.com/item?id=46020502

                  • grouchy 12 minutes ago

                    Yeah, we've been following it closely. We already support the majority of the MCP spec and plan to add support for UI over MCP.

                    But our use case is a little different. MCP Apps embed interfaces into other agents. Tambo is an embedded agent that can render your UI. There's overlap for sure, but many of the developers using us don't see themselves putting their UI inside ChatGPT or Claude. That's just not how users use their apps.

                    That said, we're thinking about how we could make it easy to build an embedded agent and then selectively expose those UI elements over MCP Apps where it makes sense.