• Sn0wCoder 2 hours ago

    Very neat, bookmarked. Read through the README.md and it would be nice to show an example of using it with Ollama or LM Studio running locally. I'd guess you just set up an OpenAI model, point the baseUrl at the local instance / API, and go from there? For something that is fully local, connecting to a provider with an API key is iffy at best, unless you are paying for enterprise keys. I guess the first line says used by ‘Enterprises’, but to be fully local that would include running the LLM locally as well?
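
    Something like this sketch is what I'd expect (assuming the gateway exposes a baseUrl override the way the plain OpenAI SDK does - the code below uses the vanilla openai package and Ollama's default OpenAI-compatible endpoint, http://localhost:11434/v1; LM Studio's default is http://localhost:1234/v1):

      // Sketch only: vanilla OpenAI SDK against a local Ollama server,
      // not this library's actual API.
      import OpenAI from "openai";

      const client = new OpenAI({
        baseURL: "http://localhost:11434/v1",
        apiKey: "ollama", // unused by Ollama, but must be non-empty
      });

      const completion = await client.chat.completions.create({
        model: "llama3.1", // whatever model you have pulled locally
        messages: [{ role: "user", content: "Hello from a fully local setup" }],
      });
      console.log(completion.choices[0].message.content);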

    • nip 2 hours ago

      Incredible work!

      I've been meaning to add "intelligence" to my Telegram monitoring bot: it couldn't come at a better time!

      Thank you for building it!

      • maeil 3 hours ago

        A more fleshed out competitor to llm-interface [1]? Looks great!

        Does it run in FaaS/serverless environments out of the box? Lambdas, Cloudflare Workers, Vercel functions and the likes? Deno? The README says "Isomorphic - works everywhere", but might be nice to make this more explicit.

        [1] https://github.com/samestrin/llm-interface

        • vbo 3 hours ago

          Looks good, might give it a try. I was looking for something similar to provide a unified interface for GPT and Claude and eventually hacked something together myself, as none of the solutions I found could deal with structured output properly across both vendors.

          • hirako2000 an hour ago

            What do you mean by "no proxy"?

            • madarco 14 minutes ago

              I think it's because the library calls the LLM directly and saves any debug/trace info itself, while other tools let you keep using the standard SDK (e.g. OpenAI's) and run an HTTPS proxy to intercept the requests.
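
              Roughly, the proxy approach looks like this (the proxy URL below is made up, just to illustrate the pattern):

                // Proxy approach: keep the vendor's standard SDK, but point it
                // at an intercepting gateway that logs/traces each request.
                import OpenAI from "openai";

                const client = new OpenAI({
                  baseURL: "https://llm-proxy.example.com/v1", // hypothetical proxy
                  apiKey: process.env.OPENAI_API_KEY,
                });

                // The proxy forwards this to api.openai.com and records it;
                // a "no proxy" library calls the provider directly and keeps
                // the trace/debug data in-process instead.
                const res = await client.chat.completions.create({
                  model: "gpt-4o-mini",
                  messages: [{ role: "user", content: "ping" }],
                });
                console.log(res.choices[0].message.content);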

            • kazcaptain 4 hours ago

              Great lib, will give it a go in my work

              • ArshDilbagi 3 hours ago

                thank you

              • alex_suzuki 3 hours ago

                What's a "Super SDK"? A meta SDK, i.e. an SDK wrapping other SDKs/APIs?

                • ArshDilbagi 3 hours ago

                  We have implemented “super-types” that are strongly typed in TypeScript and compatible with all providers (think of a superset in mathematics). Hence the name, Super SDK. Gateway is super lightweight. We don’t wrap around other SDKs. We describe the APIs in the provider packages (e.g. @adaline/openai) that plug into @adaline/gateway, which has all the features.
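
                  To illustrate the idea (these are not our actual types, just a sketch of what a provider-agnostic "super-type" plus one provider mapping could look like):

                    // Sketch only - illustrative types, not the library's real ones.
                    // One request type broad enough for every provider ("super-type"),
                    // which each provider package maps onto its own wire format.
                    type Role = "system" | "user" | "assistant" | "tool";

                    interface SuperMessage {
                      role: Role;
                      content: string;
                    }

                    interface SuperChatRequest {
                      model: string;          // "gpt-4o", "claude-3-5-sonnet", ...
                      messages: SuperMessage[];
                      maxTokens?: number;     // mapped to max_tokens etc. per provider
                      temperature?: number;
                    }

                    // A provider package is, conceptually, a mapping like this:
                    export function toOpenAIBody(req: SuperChatRequest) {
                      return {
                        model: req.model,
                        messages: req.messages,
                        max_tokens: req.maxTokens,
                        temperature: req.temperature,
                      };
                    }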

                  • alex_suzuki 3 hours ago

                    Got it, thanks!

                • npace12 4 hours ago

                  that's a lot of work and it looks nicely done, thanks!

                  • ArshDilbagi 3 hours ago

                    thank you

                  • BoorishBears 4 hours ago

                    Can tools return images to the LLM using this SDK? (where supported by the provider)

                    • ArshDilbagi 3 hours ago

                      Yes, we support images, tools, response format (for OpenAI) and everything else.

                      • BoorishBears 3 hours ago

                        I'm asking specifically about images being returned by a tool call. I didn't see any indication that it's supported while skimming through the Zod types.

                    • ArshDilbagi 6 hours ago

                      The fully local, production-grade Super SDK that provides a simple, unified, and powerful interface for calling 200+ LLMs.

                      - Production-ready and used by enterprises.
                      - Fully local and NOT a proxy. You can deploy it anywhere.
                      - Comes with batching, retries, caching, callbacks, and OpenTelemetry support.
                      - Supports custom plugins for caching, logging, HTTP client, and more. You can use it like LEGOs and make it work with your infrastructure.
                      - Supports plug-and-play providers. You can run fully custom providers and still leverage all the benefits of Adaline Gateway.

                      Features:

                      - Strongly typed in TypeScript
                      - Isomorphic - works everywhere
                      - 100% local and private and NOT a proxy
                      - Tool calling support across all compatible LLMs
                      - Batching for all requests with custom queue support
                      - Automatic retries with exponential backoff
                      - Caching with custom cache plug-in support
                      - Callbacks for full custom instrumentation and hooks
                      - OpenTelemetry to plug tracing into your existing infrastructure
                      - Plug-and-play custom providers for local and custom models