If anyone wants to try this, work through the arithmetic; it's incredibly easy (and a fun Saturday morning exercise if you're into this kind of thing) to code up on ShaderToy. From scratch is fun, but if you need a hint to get started, I just made one: https://www.shadertoy.com/view/Mc3cW2. There are also a bunch of super clever text hacks other people have done, like this Matrix in less than 300 characters https://www.shadertoy.com/view/llXSzj or a green CRT display effect https://www.shadertoy.com/view/XtfSD8. Loads of other examples are out there if you look around.
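The arithmetic boils down to a bit test per pixel. A minimal sketch (mine, simpler than the linked shader): one hardcoded 8x8 glyph, one int per row, tiled across the screen:

```
// minimal sketch: decode one packed glyph with a bit test per pixel
const int GLYPH_A[8] = int[8](0x18, 0x3C, 0x66, 0x66, 0x7E, 0x66, 0x66, 0x00);

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    ivec2 cell = ivec2(fragCoord) / 8;         // which 8x8 character cell
    ivec2 px   = ivec2(fragCoord) - cell * 8;  // pixel within the cell
    int   row  = GLYPH_A[7 - px.y];            // rows stored top-down, y is up
    float on   = float((row >> (7 - px.x)) & 1);
    fragColor  = vec4(vec3(on), 1.0);
}
```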
I've never been able to make text look good at small sizes whenever I've tried immediate-mode text rendering. Even in the first shadertoy, in vec2(30, -30), if you change 30 to 300, you'll see some artifacts. Is there a trick to getting that right? For me, multisampling the texture inside the fragment shader appears to work best, although it still isn't as good as the state of the art.
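By multisampling I mean something like this rough sketch (the texture name and uv mapping are placeholders, not real code from anywhere): average four sub-texel taps instead of taking a single sample.

```
uniform sampler2D fontTex;   // placeholder name for the glyph texture

float supersample(vec2 uv, vec2 texelSize)
{
    float acc = 0.0;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
        {
            // four taps offset by +-1/4 texel, then averaged
            vec2 offset = (vec2(i, j) - 0.5) * 0.5 * texelSize;
            acc += texture(fontTex, uv + offset).r;
        }
    return acc * 0.25;
}
```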
Yeah, for that you want to do something better than the nearest-neighbor sampling I did there. Multisampling can help, but there are definitely some other alternatives. This is where the texture/atlas method the author avoided comes in very handy, because it typically comes with mip-mapping, which will help small text look good. There are also analytic stroke drawing methods, though even that isn't perfect (it always depends on your choice of filter, what actual display you use, what your goals are, etc.)
ShaderToy comes with a texture atlas built in. I have one or two examples of that, for example https://www.shadertoy.com/view/ltBfDD. In addition to mipmapped textures, there are other pseudo-antialiasing methods people use on ShaderToy; for example, when doing 2d stuff you can use the pixel derivative to make 1-pixel-wide blending functions and use them to antialias hard edges. Example: https://www.shadertoy.com/view/MtyyRc
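That trick is tiny; a sketch, assuming you already have a signed distance d to the edge:

```
// d = signed distance to the edge; fwidth(d) ~= how much d changes across
// one pixel, so this blends over roughly one pixel regardless of scale
float aaEdge(float d)
{
    float w = fwidth(d);
    return smoothstep(-0.5 * w, 0.5 * w, d);
}
```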
I work in game dev in Unity and oh boy are they all backwards.
Some years ago they bought the best font rendering tool that a guy had written and incorporated it natively. Of course after that all work on it pretty much stopped and it killed the competitive font rendering market.
The other day I wanted to try and make an app that looks as if it's using a native console font, and I had to fiddle for 2+ hours just to get it about 90% of the way there.
Off-topic but interesting: the Matrix effect in HTML/CSS/JS, in 1024 bytes,
```
<head><style>*{margin:0;padding:0;line-height:1;overflow:hidden;}div{width:1em;position:absolute;}</style><script>
w=window;n=w.innerWidth;m=w.innerHeight;d=document;q="px";
function z(a,b){return Math.floor(Math.random()*(b-a)+a)}
f=" 0123456789";for(i=0;i<45;i++)f+=String.fromCharCode(i+65393);
function g(){for(i=0;i<90;i++){r=d.createElement("div");for(j=z(20,50);j;j--){x=d.createElement("pre");y=d.createTextNode(f[z(0,56)]);x.appendChild(y);x.style.opacity=0;r.appendChild(x)}r.id="r"+i;r.t=z(-99,0);with(r.style){left=z(0,n)+q;top=z(-m,0)+q;fontSize=z(10,25)+q}d.body.appendChild(r);setInterval("u("+i+")",z(60,120))}}
function u(j){e=d.getElementById("r"+j);c=e.childNodes;t=e.t+1;if((v=t-c.length-50)>0){if((e.style.opacity=1-v/32)==0){for(f in c)if(c[f].style)c[f].style.opacity=0;with(e.style){left=z(0,n)+q;top=z(-m/2,m/2)+q;opacity=1}t=-50}}e.t=t;if(t<0||t>c.length+12)return;for(f=t;f&&f>t-12;f--){s=1-(t-f)/16;if(f<c.length&&c[f].style){c[f].style.opacity=s;}}}
</script><body text=#0f0 bgcolor=#000 onload=g()>
```
This is delightfully clever and hacky (so basically like every 3d rendering technique ever), but the end result isn't exactly beautiful unless you're trying to recreate an old-school electronic billboard. You could improve it by adding more bits, but long before it starts to look good you'd be searching for an easier way to handle setting all the bits... and there's almost certainly no more efficient solution than using black and white pixels in a drawing program and then saving the result in a texture. So, full circle.
If anyone is interested in a more common way that modern 3d rendering engines draw text, look up SDF text (and related techniques like MSDF etc.). This uses a preprocessing step to bake a traditional texture atlas, but one holding signed distance fields.
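The draw side then reduces to one smoothstep over the sampled distance; a rough sketch (names are illustrative, and the atlas lookup that produces 'uv' is assumed to happen elsewhere):

```
// the atlas stores distance to the outline, remapped so 0.5 sits exactly
// on the glyph edge; one smoothstep gives scale-independent antialiasing
float sdfCoverage(sampler2D sdfAtlas, vec2 uv)
{
    float d = texture(sdfAtlas, uv).r;
    float w = fwidth(d);
    return smoothstep(0.5 - w, 0.5 + w, d);
}
```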
> So, full circle
In case anyone hasn't yet seen the 1968 full circle paper, "On the Design of Display Processors": http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...
Hardware in their case, but we also have software saṃsāra.
> If anyone is interested in a more common way that modern 3d rendering engines draw text, look up SDF text (and related techniques like MSDF etc.).
This is now at least a generation out of date. Pretty much everyone is now using approaches like https://sluglibrary.com, where one directly rasterises the font bezier curves in a shader.
I cooked up a (very basic) version of the concept a while back: https://www.shadertoy.com/view/sdXBDs
It is pretty clever for debug text if, for instance, textures aren't uploading properly. But uh... while it's cute that the OP compares sprite sheets to 16th century manual typesetting, the reality is that it took a printer's assistant an hour to lay out a broadsheet of tiny metal slugs on a press, and it takes oh, < 10ms to upload a spritesheet to a GPU, which is then infinitely configurable.
Not saying it's not a neat trick, it is.
There's also the option of rendering text as meshes.
TextMeshPro goes one step further and uses signed distance fields to handle arbitrary scale.
https://docs.unity3d.com/Packages/com.unity.textmeshpro@4.0/...
Going one step further still there's the option of evaluating font curves directly on the GPU, which can be high quality regardless of scale or perspective. That turns out to be very difficult to do efficiently but it can be done.
Meshes and SDFs are much simpler on the GPU side but scaling them up too much can compromise accuracy, and scaling meshes down too much can introduce aliasing.
Another example of this by Evan Wallace (founder of Figma): https://medium.com/@evanwallace/easy-scalable-text-rendering... (code: https://github.com/evanw/theta)
What a great write up of a clever technique, I don’t know much about graphics but I could follow it. I submitted to HN as I’m sure others will also enjoy it :) thanks for sharing!
Very cool! It would be fun to see some kind of performance comparison against the "traditional" textured method.
As usual with modern GPU stuff for (semi-)simple things like this (not knocking the OP), I guess the answer to "how does it perform?" is "yes". :/
The answer I look for in "how does it perform?" is "VSCode stops eating hundreds to thousands of MB of my VRAM".
Sebastian Lague has a good video covering many different font rendering techniques.
I have used a similar technique before, embedding the entire font data in the fragment shader source code. Then you can use `snprintf` to print directly into a GPU buffer mapped to the CPU (this is a footgun, I know). Instead of drawing individual characters with a vertex shader, I just draw one fullscreen triangle and use `gl_FragCoord` instead of UV coordinates. Not the most efficient way of doing things, but it's a debug feature and it's fast enough to be practical.
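The fragment half looks roughly like this (a sketch with illustrative names, not my actual code; only one glyph is embedded here, where the real thing indexes a full table by character code):

```
#version 430
// character codes live in a buffer the CPU snprintf'd into
layout(std430, binding = 0) readonly buffer Text { uint chars[]; };

const ivec2 CELL    = ivec2(8, 8);
const int   COLUMNS = 80;

// stand-in embedded font: only one glyph's rows shown
const int FONT_A[8] = int[8](0x18, 0x3C, 0x66, 0x66, 0x7E, 0x66, 0x66, 0x00);

out vec4 fragColor;

void main()
{
    ivec2 p    = ivec2(gl_FragCoord.xy);
    ivec2 cell = p / CELL;                          // which character cell
    ivec2 px   = p - cell * CELL;                   // pixel within the cell
    uint  code = chars[cell.y * COLUMNS + cell.x];  // would pick the glyph
    int   row  = FONT_A[7 - px.y];                  // real version: index by 'code'
    fragColor  = vec4(vec3(float((row >> (7 - px.x)) & 1)), 1.0);
}
```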
Despite what the filename says, this is using the font from the IBM PC ROM, not the NES. You can find the "NES font" and other 8x8 pixel fonts around the web.
https://github.com/rikusalminen/triangles/blob/nesfont/shade...
> You can find the "NES font" and other 8x8 pixel fonts around the web.
This is my favorite pixel font pack:
Thanks so much! I am doing something similar to OP in my game engine, but I hand-rolled a font, which is readable but ugly af.
This is going to be such a source of inspiration! Do people have favorites from the list?
Mine is IBM XGA-AI 12x20 for sure
Incoming trivia:
I finally found out recently that the "NES" font is from the 1976 arcade game Quiz Show. The font was used in other black-and-white Kee/Atari games. The font data is available in the quizshow MAME ROM set - split into nybbles for some reason.
This game was interesting - it stored question and answer data on an 8-track tape.
The font has slightly changed over the years. Notably, the "E" glyph originally had a longer lower arm, and the "!" and "?" glyphs are often different between variants of the font. Super Mario Bros also notably modifies the 8 to have a straight crossbar instead of intersecting spines.
Neat! I don't often see text-rendering algorithms I didn't try myself. I implemented several at my startup. This wouldn't have saved me because I needed resolution independence and anti-aliasing. Also, it might not generalize to all bezier-curve font files. Converting curves to pixels can be hard, especially when glyphs cross over themselves. In general, it feels like standard text rendering is solved, and non-standard use-cases are brutal to attempt.
This actually seems conceptually similar to my favorite method, by Will Dobbie (but much simpler). Both take raw font data and use it directly in a shader. The difference being, this method takes pixel data and stores it in arrays, while Will takes SVG path data and stores it as a "vector texture".
He made a cool demo, if anyone is curious: https://wdobbie.com/warandpeace/
I've thought about doing something like this before, but my understanding was that GPUs are especially efficient at rendering from textures, while being relatively slow at bit twiddling. So although you're saving a little bit of memory here, is it actually faster than having an atlas?
Maybe you could get the best of both worlds by bitpacking into a regular texture, with a fragment shader doing the decoding.
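Something like this sketch, say with an R32UI texture packing 32 horizontal pixels per texel ('packedFont' is a name I just made up):

```
uniform usampler2D packedFont;  // GL_R32UI: one texel = 32 horizontal pixels

float packedPixel(ivec2 p)
{
    // fetch the 32-bit word covering pixel p, then extract its bit
    uint word = texelFetch(packedFont, ivec2(p.x >> 5, p.y), 0).r;
    return float((word >> uint(p.x & 31)) & 1u);
}
```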
> understanding was that GPUs are especially efficient at rendering from textures, while being relatively slow at bit twiddling.
That understanding is very out of date. For GPUs made in the last 15 or so years, a texture lookup is roughly 100 times as slow as a bitwise operation.
This is a very similar concept (the font encoding, not the shaders) to how it was possible to draw little 8x8 sprites in BBC Basic. Brings back memories of drawing pixels on graph paper and doing the mental arithmetic… oof… about 35 years ago!
Honest question, as I know almost nothing about modern computer graphics: is the performance penalty of uploading a small texture to the GPU so big that you can't just render the whole string to a texture in 2D and display that texture on two triangles?
> is the performance penalty of uploading a small texture to the GPU so big that you can't just render the whole string to a texture in 2D and display that texture on two triangles?
It's not. This technique is more about getting text on the screen in the easiest way possible for debugging. You just add some data to your shader and poof, you get text.
The converse is: you write code to generate a font atlas (more work), or go find an existing one and load it (loading code, so more work). And/or you draw a full message into a texture (more work) and cache that result until the message changes (more work).
On top of all of that, you need to manage resources and bind them into the system, whereas here no resources are needed.
But again, it's a technique for getting "debugging text" on the screen. It is not a solution for text in general.
Note that drawing text to textures is how most browsers and OSes work. They draw fonts into texture atlases dynamically (since an atlas for all of Unicode, per font per size, would take too much time and memory). They then use the texture-atlas glyphs to make more textures of portions of the app's window. In the browser you can turn on "show texture edges" to see all the textures: Rendering -> Layer borders will outline each texture in cyan.
Generally you want to avoid wasting too much memory on a GPU, even today. That large text-box texture also has to go over the PCI bus, which can cause stalls depending on when it's uploaded and whether the GPU ends up having to evict resources. If you end up with a lot of independent text-box textures being rendered by the comparatively slower CPU, that can add up quickly and cut into your budget.
Drawing using a glyph atlas is still a way better use of resources. Modern text rendering pipelines will also often use either SDF or encoded bezier curves to improve glyph legibility when scaling which is also a great way of saving more memory.
Drawing one quad to cover N characters and picking out a glyph in the shader is going to be faster than drawing individual quads for each character (for monospace fonts, anyway). But there are only so many characters you can fill the screen with, so it's probably not a huge difference in practice.
Regarding the upload part: at the end of the day, you have X bytes of glyphs and they need to get into GPU memory somehow. Whether you get them there as textures, as uniform data or as shader constants doesn't really matter performance-wise. If anything, doing it through shader constants as described in TFA is more expensive on the CPU side, since all those constant declarations need to be processed by the shader compiler.
What does matter on the GPU side is which memory hierarchy you hit when reading glyph data (texture fetches have a dedicated L1 cache on most GPUs, larger than the "normal" L1 cache I think) and what order the data is in (textures are usually stored in some version of Morton order to avoid cache misses when you're shading blocks of pixels). For a production atlas-based text renderer you probably want to use textures.
Edit: I misread the question; you were asking about drawing individual glyphs on the GPU vs. drawing an entire block of text on the CPU, right? This is a speed/space tradeoff, the answer is going to depend on how much memory you want to blow on text, whether your text changes, whether it needs to have per-character effects applied, and so on.
You can render the entire string before upload, but then you are essentially using a CPU render, which will be slower than having the GPU do the same thing.
FWIW, this method is also a texture despite being called “texture-less”; the texture is just stored in a different format and a different place. True textureless font rendering evaluates the vector curves on the fly.
It depends on the application. It’s the easiest way especially if you might encounter right to left script, CJK or emoji. It is worthwhile to cache the textures, most text does not change every frame. It is good enough for us.
It's huge. Passing data from the CPU to the GPU is, 90% of the time, the biggest bottleneck.
Pretty confusing to say you're not going to store a bitmap in the shader... and then explain exactly how you stored a bitmap in the shader!
(TL;DR, he embeds a bitmap font in the shader.)
No, they say they are not going to store a bitmap in a texture, which is not the same thing as embedding it directly in the shader code.
You could compare that to storing some data in a separate file which needs to be read during runtime versus embedding the data directly in the source code.
The bitmap absolutely is a texture in the broad sense of the word. It’s not a Vulkan texture in the sense that it doesn’t use the Vulkan texture API, but it is a texture nonetheless.
Moreover, the parent's point is doubly valid because of the example "Look Ma, No Font Atlas!!!" that uses a font atlas baked into shader code. I totally expected this article, based on the title, to talk about stroked font rendering, and instead it's an article about "texture-less" textured rendering that uses a "no font atlas" font atlas.
The effect is that you bypass hardware specialized for efficient pixel lookup in favor of general data lookup inside the shader binary. You're saving yourself some memory by using 1 bit per pixel rather than at least 8 (none of the major APIs expose a 1-bit texture format AFAIK, so R8 would be the next best thing), but you're bound to spend some extra cycles on the lookup and decoding of your embedded font.
> none of the major APIs expose a 1-bit texture format AFAIK so R8 would be the next best thing
I think the next best thing is BC4. That compressed format stores an 8-bit/pixel grayscale texture in 8 bytes per 4x4 block, i.e. 4 bits/pixel, half the size of R8.
https://learn.microsoft.com/en-us/windows/win32/direct3d10/d...
While not technically wrong, I also find it a bit misleading.
When told it's going to be "texture-less text rendering", I was thinking of procedural drawing of glyphs, not embedding bitmaps in a shader instead of a texture.
He doesn't:
> Obviously, we can’t store bitmaps inside our shaders, but we can store integer constants, which, if you squint hard enough, are nothing but maps of bits. Can we pretend that an integer is a bitmap?
He seems a bit confused about what a bitmap is. There's no squinting or pretending involved here.
Memory is memory, irrespective of whether it's "code memory" or "data memory".
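Concretely, no squinting needed; each hex constant is literally a row of pixels:

```
const int ROW = 0x7E;  // binary 01111110, i.e. the pixel row .XXXXXX.
bool pixelOn(int x) { return ((ROW >> (7 - x)) & 1) != 0; }
```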
Back in the bad old days you could just use precompiled textures, which are basically a set of memory-write CPU instructions using immediate-mode operands (no texture/bitmap lookup of any kind).
I suppose you could call this approach "texture as code"
Previously https://news.ycombinator.com/item?id=41993084
(tho mine was the only comment last time)
It would be interesting to compare this technique with native text rendering. How many FPS can you get when rendering a full screen of text?
On modern desktop GPUs, there’s nothing frame rate limiting in practice about the technique in the article. So much so, it probably doesn’t make sense to measure in FPS. I don’t know which native text rendering you’re thinking of, but some text rendering methods are very high quality and not super fast or optimized, and they might limit framerate if you did a full screen of text every frame. Usually in that case people will render to a texture so that it doesn’t have to be done every frame. There are also enough medium & high quality text rendering methods that are fast enough that you can do full screen text every frame without worrying about FPS at all, usually fast enough to draw text on top of the game’s 3d rendering that the text doesn’t even noticeably affect the game’s framerate.
When making my own text editor, text rendering was the bottleneck. I used the web canvas fillText for text rendering. I tried to make my own text renderer using bitmaps but couldn't beat the "native" text rendering, so I wonder if it would get any faster with this method in WebGL. For example, re-parsing the whole file so that the editor gets a semantic understanding of the code is many times faster than rendering the text. And I discovered that coding at 240 fps feels very nice! When you push a key, you see the result the very next frame.
Browser engines are definitely highly optimized at text rendering, and they do cache the results of text renders and turn them into bitmaps that get blitted, so it's not surprising it can be hard to beat even by using bitmaps, especially if you let its caching mechanism dominate the render time. Some things the browser engines pay close attention to are storage layout, cache alignment, and cache line size, which is a fancy way of saying that not all bitmap renders are the same; their bitmap renderer might be faster than my naive bitmap renderer if I don't pay attention to the hardware details.
The actual rendering part of fillText() isn't that fast, though, so you probably could beat it with your own bitmaps if you were to render a different full screen of text every frame, which would undermine the bitmap caching mechanism. I'm not sure, but it might help to vary the text style every frame too; I'm not sure if the engines build font atlases on the fly… they might, in which case defeating the cache would require using enough different styles to churn through several gigabytes of image cache.
Another thing to pay attention to is whether you’re rendering text to a buffer that then needs to be copied to the frame buffer, or whether you’re rendering directly to the frame buffer. Avoiding the round-trip through memory, if you can, will be quite a bit faster.
The technique in the article might be best for things like debug text overlays on top of a game. If you need to display a handful of values and watch them change, this shader technique can be very fast.
I think SLUG does this, but professionally:
Fun, I really wish I had the ability to reason around shaders and draw calls to do things like this :-).
I find it interesting how much effort was put into getting high quality scalable vector fonts and how useless these techniques were once we got our accelerated vector graphics co-processors.
I mean, there are some very interesting projects trying to do font rendering on the graphics card, but by and large I find it funny how terrible they are at it.
What would a native gfx-card friendly scalable font format look like? would it just be a triangle mesh?
IIRC every possible quadratic bezier curve (the kind used in TrueType) can be rendered as one triangle, and the equation in barycentric coordinates is identical for all possible curves, so you can just evaluate it in the shader with no extra per-curve vertex data.
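The trick is due to Loop and Blinn: assign the fixed coordinates (0,0), (0.5,0), (1,1) to the three control points of each curve's triangle (they're constants, so they can even be derived from the vertex index rather than stored), and the same fragment shader then works for every curve. A sketch:

```
in vec2 uv;  // P0 -> (0,0), P1 -> (0.5,0), P2 -> (1,1) on the control triangle
out vec4 fragColor;

void main()
{
    // after interpolation, the interior of ANY quadratic bezier satisfies
    // u*u - v < 0, so this one shader handles every curve
    if (uv.x * uv.x - uv.y > 0.0)
        discard;               // outside the curve
    fragColor = vec4(1.0);
}
```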
I'm pretty sure we've done this before... lol!
Please make something similar in SDL.
As an aside, I've yet to come across anything more technically complete than ClearType. Bitmaps/textures done via some janky early-2000s NeHe-tutorial-inspired thing aren't even on the table. Yeah, people will hate on Microsoft and Windows and bicker and whatever; I don't care, because after all the shit I've dealt with trying to use freetype and additional libraries, ClearType has never let me down. I've used D2D with D3D in conjunction with shared surfaces and other hacks to join the two, and it's pleasant enough that the final product is well worth the programming agony.