A couple of years ago I'd look at those Chinese-text diagrams and feel sad and isolated, because they represented knowledge too foreign for me to read. Nowadays it may as well be just another piece of well-written technical documentation. We are truly lucky to live in this age of wonders, and this is really only the opening phase of LLMs as a technology.
Are there general LLMs that can translate text 'on the image', as opposed to spitting out the translation as plain text? I feel it would be easier to reason about very text-heavy images like some of these diagrams that way, but from what I remember, the last time I tried, ChatGPT and Claude would only give me a text translation.
In this specific case, it is an SVG, so you can ask it to translate the SVG source.
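A minimal sketch of what "translate the SVG source" means in practice: pull the label strings out of the SVG so they can be handed to an LLM (or any translator) as plain text. This assumes the diagram keeps its labels in `<text>`/`<tspan>` elements, as most hand-authored SVGs do; the sample SVG and its Chinese labels here are just illustrative.

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for a diagram's SVG source
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <text x="10" y="20">中央处理器</text>
  <text x="10" y="40"><tspan>内存</tspan></text>
</svg>"""

root = ET.fromstring(SVG)

# itertext() walks the tree in document order; strip out the
# whitespace-only runs between elements, keeping only real labels.
labels = [t.strip() for t in root.itertext() if t.strip()]
print(labels)  # ['中央处理器', '内存']  i.e. ['CPU', 'memory']
```

Once translated, the strings can be written back into the same elements, which preserves the diagram's layout instead of flattening it into prose.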
Google Lens does exactly that.
I didn't realize that since the last time I looked at these docs they'd added lots of nice block diagrams for all the different parts of the machine. Neat!
Maybe I'm dating myself, but when I read "Support Vector" I assumed it was talking about SVM models.