The model you linked is not an LLM, by either architecture or size.
A few thoughts:
1) TensorRT anything isn’t an option because it requires Nvidia GPUs.
2) The serving frameworks you linked likely don’t support the architecture of this model, and even if they did, they have varying levels of CPU support.
3) I’m not terribly familiar with Hetzner but those instance types seem very low-end.
The model you linked has already been converted to ONNX. Your best bet (probably) is to take the ONNX model and load it in Triton Inference Server. Of course Triton is focused on Nvidia/CUDA, but if it doesn’t find an Nvidia GPU it will load the model(s) on CPU. You can then do some performance testing in terms of requests/s, but prepare to not be impressed…
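As a rough sketch of what that looks like: Triton expects a model repository layout like `models/classifier/1/model.onnx` plus a `config.pbtxt` along these lines (the model name, batch size, and instance count here are placeholders, not anything specific to your model):

```
name: "classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
instance_group [
  { count: 2, kind: KIND_CPU }   # force CPU placement, multiple instances for throughput
]
```

For ONNX models Triton can usually auto-complete the input/output tensor config, so a minimal file like this is often enough to get started; `kind: KIND_CPU` keeps it on CPU even in a GPU-enabled build.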
Then you could look at (probably) int8 quantization of the model via one of the available approaches (ONNX Runtime itself, Intel Neural Compressor, etc.). With Triton specifically you should also look at the OpenVINO CPU execution accelerator support. You will need to check whether any of these dramatically impact the quality of the model.
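The ONNX Runtime route is the least effort of those. A minimal sketch, assuming the file paths below are placeholders for your actual exported model:

```python
def quantize(src="model.onnx", dst="model-int8.onnx"):
    """Dynamic int8 quantization with ONNX Runtime's built-in tooling.

    src/dst are placeholder paths: src is the FP32 ONNX export,
    dst is where the quantized model gets written.
    """
    from onnxruntime.quantization import quantize_dynamic, QuantType

    quantize_dynamic(
        model_input=src,
        model_output=dst,
        # int8 weights; activations are quantized dynamically at runtime,
        # so no calibration dataset is needed
        weight_type=QuantType.QInt8,
    )
```

Dynamic quantization typically shrinks the model ~4x and speeds up CPU inference, but this is exactly where you need to re-run your eval set to see how much quality you lose.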
Overall I think “good, fast, cheap: pick two” definitely applies here and even implementing what I’ve described is a fairly significant amount of development effort.
Well, looking at Triton Inference Server + OpenVINO backend [1]... uff... as you said: "significant amount of development effort". Not easy to handle when you're doing it for the first time.
Is ONNX Runtime + OpenVINO [2] a good idea? It seems easier to install and use: pre-built Docker image and Python package... Not sure about performance (the hardware-related performance improvements are in OpenVINO itself anyway, right?).
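From what I can tell it would look roughly like this: install the `onnxruntime-openvino` package and ask for the OpenVINO execution provider when creating the session, falling back to the plain CPU provider. A sketch (the model path is a placeholder):

```python
def pick_providers(available):
    """Prefer the OpenVINO EP when installed, keep the default CPU EP as fallback."""
    providers = ["CPUExecutionProvider"]
    if "OpenVINOExecutionProvider" in available:
        providers.insert(0, "OpenVINOExecutionProvider")
    return providers


def load_session(model_path="model.onnx"):
    # Requires `pip install onnxruntime-openvino` for the OpenVINO EP;
    # with plain `onnxruntime` this silently falls back to the CPU EP.
    import onnxruntime as ort

    return ort.InferenceSession(
        model_path,
        providers=pick_providers(ort.get_available_providers()),
    )
```

The calling code stays identical either way, which is the main appeal over wiring up a full Triton deployment.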
[1] https://github.com/triton-inference-server/openvino_backend
[2] https://onnxruntime.ai/docs/execution-providers/OpenVINO-Exe...
Hugging Face does maintain a package named Text Embeddings Inference (TEI) with GPU/CPU-optimized container images. While I have only used it for hosting embedding models, it does appear to support RoBERTa-architecture classifiers (specifically sentiment analysis).
https://github.com/huggingface/text-embeddings-inference
You can always run a zero-shot pipeline from HF behind a simple Flask/FastAPI application.
Thanks for Text Embeddings Inference (never heard of it before).
> you can always run a zero-shot pipeline in HF with a simple Flask/FastAPI application.
Yeah, sometimes you don't see the things that are right in front of your nose. You mean this? https://huggingface.co/docs/api-inference/index
SetFit