You may get a letter from Apple’s lawyers because of the name - Swift and SwiftUI are trademarks, and this seems like something they’d want to keep for themselves.
> Swift and SwiftUI are trademarks
This is called SwiftAI, though.
Right, and if it were called IBM AI, they’d get a letter from IBM instead. You can’t just tack something onto the end of a trademarked brand name.
If they can prove enough similarity or overlap with their brand, they’ll find a way. And since it targets macOS/iOS specifically, there you go.
Also, considering that Apple probably has Apple Intelligence trademarked.
Awesome, this is a good idea! Having a nice wrapper to make LLM calls easier is very helpful too :)
Nice to see someone digging in on the system models. That's on my list to play with, but I haven't seen much new info on them or how they perform yet.
We’ve begun internally evaluating the model and will share our findings in more detail later. So far, we’ve found that it performs well on tasks such as summarization, writing, and data extraction, and shows particular strength in areas like history and marketing. However, it struggles with STEM topics (e.g., math and physics), often fails to follow long or complex instructions, and sometimes avoids answering certain queries. If you want us to evaluate a certain use case or vertical, please share it with us!
Mind me asking what your thoughts are on the overall quality of Apple's on-device LLMs? I've found that LanguageModelSession always returns very lengthy responses:
https://developer.apple.com/forums/thread/789182?answerId=85...
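For what it’s worth, response length can usually be reined in through the session’s instructions plus generation options. A minimal sketch against Apple’s FoundationModels API; exact knob names (e.g. `maximumResponseTokens`) may differ by OS version:

```swift
import FoundationModels

// Sketch: nudging the on-device model toward shorter answers.
// Assumes the iOS 26 / macOS 26 FoundationModels framework.
func shortAnswer(to prompt: String) async throws -> String {
    // System-level instructions carry more weight than the prompt itself.
    let session = LanguageModelSession(
        instructions: "Answer concisely, in one or two sentences."
    )
    let options = GenerationOptions(
        temperature: 0.5,
        maximumResponseTokens: 128  // hard cap on output length
    )
    let response = try await session.respond(to: prompt, options: options)
    return response.content
}
```

In practice the instructions do most of the work; the token cap just truncates runaway outputs rather than making the model terse.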
I do a lot of AI work, and right now the story for running LLMs on iOS is very painful (though running Whisper and the like is pretty nice), so this is exciting. The API looks Swift-native and great; I can't wait to use it!
Question/feature request: Is it possible to bring my own CoreML models over and use them? I honestly end up bundling llama.cpp and doing gguf right now because I can't figure out the setup for using CoreML models, would love for all of that to be abstracted away for me :)
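For reference, the non-abstracted Core ML path looks roughly like this. A sketch only: it assumes you already have a converted `.mlmodel` on disk (e.g. produced by coremltools), and newer OS versions prefer the async variant of `compileModel`:

```swift
import Foundation
import CoreML

// Sketch: compile and load a converted Core ML model at runtime.
// `modelURL` is assumed to point at a .mlmodel bundled with the app.
func loadModel(at modelURL: URL) throws -> MLModel {
    // Compile to the .mlmodelc form the runtime actually loads.
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML pick CPU/GPU/Neural Engine
    return try MLModel(contentsOf: compiledURL, configuration: config)
}
```

The fiddly part the commenter is describing tends to come after this: wiring inputs/outputs through `MLFeatureProvider` and handling tokenization yourself, which is exactly what a wrapper could abstract away.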
That’s a good suggestion, and it indeed sounds like something we’d want to support. Could you help us better understand your use case? For example, where do you usually get the models (e.g., Hugging Face)? Do you fine-tune them? Do you mostly care about LLMs (since you only mentioned llama.cpp)?
Thank you! I’ve been fine-tuning tiny Llama and Gemma models using transformers, then exporting from the safetensors it spits out. My main use case is LLMs, but I’ve also tried getting a fine-tuned YOLO and other PyTorch models running and hit similar problems; it just seemed very confusing to figure out how to properly use the phone for this.
Needs more examples on custom.
Thanks for the feedback! When you say “custom,” do you mean additional integrations with LLM providers, or more documentation on how to build your own custom integration? If you mean the former, we’re currently focused on stabilizing the API and reaching feature parity with FoundationModels (e.g., adding streaming). After that, we plan to add more integrations, such as Claude, Gemini, and on-device LLMs from Hugging Face.
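If it’s the latter, a custom integration presumably boils down to conforming to some provider abstraction. This is a hypothetical shape for illustration only; `LLMBackend` and `complete(prompt:)` are made-up names, not SwiftAI’s actual API:

```swift
import Foundation

// Hypothetical: what a pluggable backend protocol might look like.
protocol LLMBackend {
    func complete(prompt: String) async throws -> String
}

// Example conformance backed by an OpenAI-compatible HTTP endpoint.
struct HTTPBackend: LLMBackend {
    let endpoint: URL
    let apiKey: String

    func complete(prompt: String) async throws -> String {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONSerialization.data(withJSONObject: [
            "model": "gpt-4o-mini",
            "messages": [["role": "user", "content": prompt]]
        ])
        let (data, _) = try await URLSession.shared.data(for: request)
        // Minimal parsing; real code would decode a typed response.
        let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
        let choices = json?["choices"] as? [[String: Any]]
        let message = choices?.first?["message"] as? [String: Any]
        return message?["content"] as? String ?? ""
    }
}
```

Documenting whatever the real equivalent of this protocol is would probably cover most “custom” requests, since streaming and tool use could layer on top of the same abstraction.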
Another vibe-coded project.