• stephantul 12 hours ago

    Amazing post. I didn’t think this through a lot, but since you are normalizing the vectors and calculating the Euclidean distance, you will get the same results using a simple matmul, because squared Euclidean distance over normalized vectors is a linear transform of cosine similarity.

    Since you are just interested in the ranking, not the actual distance, you could also consider skipping the sqrt. This gives the same ranking, but will be a little faster.
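
    Roughly, as a sketch (assuming plain number arrays rather than whatever structure the post uses internally): for unit vectors |a - b|^2 = 2 - 2 * dot(a, b), so taking the candidate with the largest dot product is the same as taking the one with the smallest Euclidean distance, no sqrt needed.

      // For L2-normalized vectors, the closest candidate by Euclidean
      // distance is the one with the largest dot product.
      function dot(a: number[], b: number[]): number {
        return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
      }

      function closestIndex(query: number[], candidates: number[][]): number {
        let best = 0;
        let bestDot = -Infinity;
        for (let i = 0; i < candidates.length; i++) {
          const d = dot(query, candidates[i]); // one row of the matmul
          if (d > bestDot) {
            bestDot = d;
            best = i;
          }
        }
        return best;
      }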

    • qingcharles 11 hours ago

      It's stuff like this I would have loved to know when I was doing game engine dev in the 90s.

  • roskelld 8 hours ago

    I do enjoy these kinds of write-ups, especially when it's about something that might seem so simple on the surface, but in order to get it looking great you really have to go in deep.

    Lucas Pope did a really nice write-up on how he developed his dithering system for Return of the Obra Dinn. Recommended if you also enjoyed this blog post.

    https://forums.tigsource.com/index.php?topic=40832.msg136374...

    • sph 17 hours ago

      Every example I thought "yeah, this is cool, but I can see there's space for improvement" — and lo! did the author satisfy my curiosity and improve his technique further.

      Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog

      • crazygringo 13 hours ago

        > I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.

        Not to take away from this truly amazing write-up (wow), but there's at least one generator that uses shape:

        https://meatfighter.com/ascii-silhouettify/

        See particularly the image right above where it says "Note how the algorithm selects the largest characters that fit within the outlines of each colored region."

        There's also a description at the bottom of how its algorithm works, if anyone wants to compare.

      • snackbroken 12 hours ago

        > I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.

        Acerola worked a bit on this in 2024[1], using edge detection to layer correctly oriented |/-\ over the usual brightness-only pass. I think either technique has cases where one looks better than the other.

        [1] https://www.youtube.com/watch?v=gg40RWiaHRY

        • zahlman 11 hours ago

          I can imagine there's room for "style" here, too. Just like how traditional 2D computer art ranges from thick borders and sharp delineations between colour regions to the https://en.wikipedia.org/wiki/Chiaroscuro style that seems to achieve soft edges despite high contrast, etc.

        • aleyan 8 hours ago

          Great work! While I was building ascii-side-of-the-moon [0][1] I briefly considered writing my own ascii renderer to capture differences in shade and shape of the Lunar Maria[2] better. Ended up just using chafa [3] with the hope of coming back to ascii rendering after everything is working end to end.

          Are you planning to release this as a library or a tool, or should we just take the relevant MIT licensed code from your website [4]?

          [0] https://aleyan.com/projects/ascii-side-of-the-moon

          [1] https://news.ycombinator.com/item?id=46421045

          [2] https://en.wikipedia.org/wiki/Lunar_mare

          [3] https://github.com/hpjansson/chafa

          [4] https://github.com/alexharri/website/tree/master/src

          • alexharri 6 hours ago

            The ASCII moon tool is fun to play around with!

            No plans to build a library right now, but who knows. Feel free to grab what you need from the website's code!

            If I were to build a library, I'd probably convert the shaders from WebGL 2 to WebGL 1 for better browser compatibility. Would also need to figure out a good API for the library.

            One thing that a library would need to deal with is that the shape vector depends on the font family, so the user of the library would need to precompute the shape vectors with the input font family. The sampling circles, internal and external, would likely need to be positioned differently for different font families. It's not obvious to me how a user of the library would go about that. There'd probably need to be some tool for that (I have a script to generate the shape vectors with a hardcoded link to a font in the website repository).
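
            As a rough sketch, that per-font precomputation could look something like the following (hypothetical code with placeholder circle positions, not the script from the repository):

              // Hypothetical sketch: rasterize a glyph with the target font on a
              // 2D canvas, then average its pixel coverage inside each sampling
              // circle to produce the glyph's shape vector.
              interface Circle { x: number; y: number; r: number }

              function shapeVector(
                char: string,
                fontFamily: string,
                cellW: number,
                cellH: number,
                circles: Circle[], // placeholder positions; would be tuned per font
              ): number[] {
                const canvas = document.createElement("canvas");
                canvas.width = cellW;
                canvas.height = cellH;
                const ctx = canvas.getContext("2d")!;
                ctx.font = `${cellH}px ${fontFamily}`;
                ctx.textBaseline = "top";
                ctx.fillText(char, 0, 0);
                const { data } = ctx.getImageData(0, 0, cellW, cellH);

                return circles.map(({ x, y, r }) => {
                  let sum = 0;
                  let count = 0;
                  for (let py = 0; py < cellH; py++) {
                    for (let px = 0; px < cellW; px++) {
                      if ((px - x) ** 2 + (py - y) ** 2 <= r * r) {
                        sum += data[(py * cellW + px) * 4 + 3] / 255; // alpha = coverage
                        count++;
                      }
                    }
                  }
                  return count > 0 ? sum / count : 0;
                });
              }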

          • wonger_ 16 hours ago

            Great breakdown and visuals. Most ASCII filters do not account for glyph shape.

            It reminds me of how chafa uses an 8x8 bitmap for each glyph: https://github.com/hpjansson/chafa/blob/master/chafa/interna...

            There are a lot of nitty-gritty concerns I haven't dug into: how to make it fast, how to handle colorspaces, or, like the author mentions, how to exaggerate contrast for certain scenes. But I think 99% of the time, it will be hard to beat chafa. Such a good library.

            EDIT - a gallery of (Unicode-heavy) examples, in case you haven't seen chafa yet: https://hpjansson.org/chafa/gallery/

            • fwipsy 13 hours ago

              Aha! The 8x8 bitmap approach is the one I used back in college. I was using a fixed font, so I just converted each character to a 64-bit integer and then used popcnt to compare with an 8x8 tile from the image. I wonder whether this approach produces meaningfully different results from the original post's? E.g. focusing on directionality rather than bitmap matching might result in more legible large shapes, but fine noise may not be reproduced as faithfully.
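
              From memory, the matching looked something like this (a sketch, not my original code):

                // Each glyph is packed as an 8x8 bitmap in a 64-bit BigInt; the
                // best match minimizes differing pixels, i.e. popcount of the XOR.
                function popcount64(x: bigint): number {
                  let count = 0;
                  while (x > 0n) {
                    count += Number(x & 1n);
                    x >>= 1n;
                  }
                  return count;
                }

                function bestGlyph(tile: bigint, glyphs: Map<string, bigint>): string {
                  let best = " ";
                  let bestDiff = 65; // more than the max of 64 differing bits
                  for (const [char, bitmap] of glyphs) {
                    const diff = popcount64(tile ^ bitmap);
                    if (diff < bestDiff) {
                      bestDiff = diff;
                      best = char;
                    }
                  }
                  return best;
                }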

              • smusamashah 13 hours ago

                But the chafa gallery isn't showing off ascii text rendering. Are there examples that use ascii text?

                • wonger_ 12 hours ago

                  Good point. I haven't found many ascii examples online.

                  Here's a copy-paste snippet where you can try chafa-ascii-fying images in your own terminal, if you have uvx:

                    uvx --with chafa-py python -c '
                    from chafa import * 
                    from chafa.loader import Loader 
                    import sys 
                    img = Loader(sys.argv[1])
                    config = CanvasConfig() 
                    config.calc_canvas_geometry(img.width,img.height,0.5,True,False)
                    symbol_map = SymbolMap()
                    symbol_map.add_by_tags(SymbolTags.CHAFA_SYMBOL_TAG_ASCII)
                    config.set_symbol_map(symbol_map)
                    config.canvas_mode = CanvasMode.CHAFA_CANVAS_MODE_FGBG
                    canvas = Canvas(config)
                    canvas.draw_all_pixels(img.pixel_type,img.get_pixels(),img.width,img.height,img.rowstride)
                    print(canvas.print().decode())
                    ' \
                    myimage.jpg
                  
                  But the results are not as good as the OP's work: https://wonger.dev/assets/chafa-ascii-examples.png So I'll revise my claim: chafa is great for unicodey, colorful environments, but hand-tailored ASCII-only work like the OP's is worth the effort.

                • keepamovin 14 hours ago

                  my favorite ascii glyphs are the classic IBM Code Page 437: https://int10h.org/oldschool-pc-fonts/fontlist/

                  and damn that article is so cool, what a rabbithole.

                • echoangle 14 hours ago

                  Very cool effect!

                  > It may seem odd or arbitrary to use circles instead of just splitting the cell into two rectangles, but using circles will give us more flexibility later on.

                  I still don’t really understand why the inner part of the rectangle can’t just be split in a 2x3 grid. Did I miss the explanation?

                  • DexesTTP 13 hours ago

                    It's because circles allow for a stagger and overlap as shown later on. It's not really possible to get the same effect from squares.

                    • echoangle 12 hours ago

                      But it seems like you only need the stagger and overlap because you’re using circles in the first place. Would it look worse if you just divided the rectangle into 6 squares without any gaps or overlap?

                      • zestyping 12 hours ago

                        My thought exactly. The sampling circles only enable you to (awkwardly) solve a problem that was fabricated by using circles in the first place.

                        • panki27 12 hours ago

                          I wondered the same thing, but characters usually don't reach the edges, so I guess circles fit the average character better?

                      • MrJohz 9 hours ago

                        I think this is connected to the overlap and offset that are used later to account for complex or symmetrical letter shapes. If the author had just split the grid, those effects would have been harder to achieve.

                      • greggman65 10 hours ago

                          I didn’t put nearly as much effort as this post into shape matching, but I did try a few other things:

                          Non-ASCII: I tried various subsets of Unicode. There’s the geometric shapes area, CJK, dingbats, lots of others.

                          Different fonts: there are lots of different monospace fonts. I even tried non-monospaced fonts, though still drawn in a grid.

                          ANSI color style: https://16colo.rs/

                          My results weren’t nearly as good as the ones in this article, but I’m just suggesting more avenues of exploration:

                        https://greggman.github.io/doodles/textme10.html

                        Note: options are buried in the menu. Best to pick a scene other than the default

                        • mwillis 12 hours ago

                          Fantastic technique and deep dive. I will say, I was hoping to see an improved implementation of the Cognition cube array as the payoff at the end. The whole thing reminded me of the blogger/designer who, years ago, showed YouTube how to render a better favicon by using subpixel color contrast, and then IIRC they implemented the improvement. Some detail here: https://web.archive.org/web/20110930003551/http://typophile....

                          • zellyn 11 hours ago

                              +1 to wanting to see the Cognition logo with contrast. It was set up as the target, but no payoff!

                            Lovely article, and the dynamic examples are :chefs-kiss:

                          • dboon 13 hours ago

                              Fantastic article! I wrote an ASCII renderer to show a 3D Claude for my Claude Wrapped [1], and instead of supersampling I just decided to raymarch the whole thing. SDFs give you a smoother result than even supersampling, but of course your scene has to be represented with distance functions and combinations thereof, whereas your method is generally applicable.

                            Taking into account the shape of different ASCII characters is brilliant, though!

                            [1]: https://spader.zone/wrapped/

                            • alexharri 6 hours ago

                              Looks very cool! Thanks for sharing.

                              The resulting ASCII looks dithered, with sequences like e.g. :-:-:-:-:. I'd guess that it's an intentional effect since a flat surface would naturally repeat the same character, right? Where does the dithering come from?

                            • AgentMatt 14 hours ago

                              Great article!

                              I think there's a small problem with intermediate values in this code snippet:

                                const maxValue = Math.max(...samplingVector)
                              
                                samplingVector = samplingVector.map((value) => {
                                  value = x / maxValue; // Normalize
                                  value = Math.pow(x, exponent);
                                  value = x * maxValue; // Denormalize
                                  return value;
                                })
                              
                              Replace x by value.
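
                              i.e. the corrected callback:

                                const maxValue = Math.max(...samplingVector)

                                samplingVector = samplingVector.map((value) => {
                                  value = value / maxValue; // Normalize
                                  value = Math.pow(value, exponent);
                                  value = value * maxValue; // Denormalize
                                  return value;
                                })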
                              • alexharri 6 hours ago

                                Just pushed a fix, should be live in a minute or two, thanks again!

                                • alexharri 12 hours ago

                                  Good catch, thanks! I’ll push a fix once I’m home

                                • LexiMax 9 hours ago

                                  Only tangentially related, but the title reminds me of a hack you could do on old DOS machines to get access to a 160x100 16-color display mode on a CGA graphics adapter.

                                  The display mode is actually a hacked up 80x25 text mode. So in that specific narrow case, you have a display mode where text characters very much function as pixels.

                                  - https://en.wikipedia.org/wiki/Color_Graphics_Adapter

                                  - https://github.com/drwonky/cgax16demo

                                  • nxobject 5 hours ago

                                    I'm playing with a related problem in my spare time - braille character-based color graphics; while we have enough precision for sharp edges, the fundamental issues with color are still the same: if we begin with a supersampling pass for assignment, we lack precision, so we may need to do some contrast fixups afterward. I think some contrast enhancement based on your sampling schemes might be useful :) Thank you so much for posting this!

                                    (I've previously tried pre-transforming on the image side to do color contrast enhancement, but without success: I take the Sobel filter of an image, and use it to identify regions where I boost contrast. However, since this is a step preceding "rasterization", the results don't align well with character grids.)
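
                                    For concreteness, a minimal sketch of that Sobel pass (the standard 3x3 kernels over a grayscale Float32Array; the contrast-boost step itself is omitted):

                                      // Gradient magnitude via the Sobel kernels; high values mark
                                      // the edge regions where contrast would be boosted.
                                      function sobelMagnitude(
                                        img: Float32Array,
                                        width: number,
                                        height: number,
                                      ): Float32Array {
                                        const out = new Float32Array(width * height);
                                        const at = (x: number, y: number) => img[y * width + x];
                                        for (let y = 1; y < height - 1; y++) {
                                          for (let x = 1; x < width - 1; x++) {
                                            const gx =
                                              -at(x - 1, y - 1) - 2 * at(x - 1, y) - at(x - 1, y + 1) +
                                              at(x + 1, y - 1) + 2 * at(x + 1, y) + at(x + 1, y + 1);
                                            const gy =
                                              -at(x - 1, y - 1) - 2 * at(x, y - 1) - at(x + 1, y - 1) +
                                              at(x - 1, y + 1) + 2 * at(x, y + 1) + at(x + 1, y + 1);
                                            out[y * width + x] = Math.hypot(gx, gy);
                                          }
                                        }
                                        return out;
                                      }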

                                    • CarVac 15 hours ago

                                      The contrast enhancement seems simpler to perform with an unsharp mask in the continuous image.

                                      It probably has a different looking result, though.
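
                                      Something like this, as a sketch (with a 3x3 box blur standing in for the usual Gaussian):

                                        // Unsharp mask: sharpened = original + amount * (original - blurred).
                                        function unsharpMask(
                                          img: Float32Array,
                                          width: number,
                                          height: number,
                                          amount: number,
                                        ): Float32Array {
                                          const out = new Float32Array(width * height);
                                          for (let y = 0; y < height; y++) {
                                            for (let x = 0; x < width; x++) {
                                              let sum = 0;
                                              let count = 0;
                                              // 3x3 box blur, clamped at the image borders
                                              for (let dy = -1; dy <= 1; dy++) {
                                                for (let dx = -1; dx <= 1; dx++) {
                                                  const nx = x + dx;
                                                  const ny = y + dy;
                                                  if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                                                    sum += img[ny * width + nx];
                                                    count++;
                                                  }
                                                }
                                              }
                                              const blurred = sum / count;
                                              const original = img[y * width + x];
                                              out[y * width + x] = original + amount * (original - blurred);
                                            }
                                          }
                                          return out;
                                        }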

                                      • thech6newshound 6 hours ago

                                        Quite amazing breakdown, thank you!

                                        I'm hoping people who harness ASCII for stuff like this consider using Code Page 437, or similar. Extended ASCII sets comprising foreign characters are for staid business machines, and their sort-of-familiar but out-of-place accented chars have a bit of a distracting quality.

                                        437 and its kin tap into the nostalgia for BBS art, DOS, TUIs, scene NFOs, 8-bit micros... everything pre-Code Page 1252, in other words. Whilst it was a pragmatic decision for MS, it's also true that marketing needs demanded that all text interfaces disappear because they looked old. Text graphics, doubly so. That design space was now reserved for functional icons. A bit of creativity went out of (home) computing right there and then. Stuffing it all into a separate font ensured it died.

                                        But that stuff is genuinely cool to a lot of people in a way Vim (for example) has never been and never will be. This is a case of form over function. Foreign chars are not as friendly or fun as hearts, building blocks, smileys, musical notes, etc.

                                        • joshu 11 hours ago
                                          • Johnny_Bonk 2 hours ago

                                            Amazing post. I was able to take what you did, recreate it, and have some fun with it (Matrix green, etc.). Thanks for the great post!

                                            • markshtat 10 hours ago

                                              Great writeup! I put together a Python CLI implementation: https://github.com/mayz/ascii-renderer

                                              Supports color output, contrast enhancement, custom charsets. MIT licensed.

                                              • ripe 4 hours ago

                                                Wonderful article and illustrations! I got sucked in by the successive disclosures of "but this is a problem, so we do that to solve it." Bravo!

                                                • symisc_devel 15 hours ago

                                                  There is already a C library that does realtime ASCII rendering using decision trees:

                                                  GitHub: https://github.com/symisc/ascii_art/blob/master/README.md

                                                  Docs: https://pixlab.io/art

                                                  • nowayhaze 14 hours ago

                                                    The OP's ASCII art edges look way better than this

                                                  • jrmg 15 hours ago

                                                    This is amazing all round - in concept, writing, and coding (both the idea and the blog post about it).

                                                    I feel confident stating that - unless fed something comprehensive like this post as input, and perhaps not even then - an LLM could not do something novel and complex like this, and will not be able to for some time, if ever. I’d love to read about someone proving me wrong on that.

                                                    • Lerc 12 hours ago

                                                      To develop this approach you need to think through the reasoning of what you want to achieve. I don't think the reasoning in LLMs is nonexistent, but it is certainly somewhat limited. This is disguised by their vast knowledge. When they successfully achieve a result by relying on knowledge, you get an impression of more reasoning than there is.

                                                      Everyone seems familiar with hallucinations by now: when a model's knowledge is lacking but it has been fine-tuned to always give an answer. A simplistic calculation says that if an accurate answer gets you 100%, then giving an answer gets you 50% and being accurate gets you the other 50%. Hallucinations are trying to get partial credit for bullshit. Teaching a model that a wrong answer is worse than no answer is the obvious solution; turning that lesson into training methods is harder.

                                                      That's a bit of a digression, but I think it helps explain why I think a model would find writing an article like this difficult.

                                                      Models have difficulty understanding what is important. The degree to which they do achieve this is amazing, but they are still trained on data that heavily biases their conclusions toward mainstream thinking. In that respect I'm not even sure it is a fundamental lack in what they could do. It seems they are implicitly made to approach problems as "it's one of those, I'll do what people do when faced with one of those".

                                                      There are even hints in fiction that this is what we were going to do. There is a fairly common sci-fi trope of an AI giving a thorough and reasoned analysis of a problem only to be cut off by a human wanting the simple and obvious answer. If not done carefully RLHF becomes the embodiment of this trope in action.

                                                      This gives a result that makes the most people immediately happy, without regard for what is best long term, or indeed what is actually needed. Asimov explored the notion of robots lying so as to not hurt feelings. Much of the point of the robot books was to express the notion that what we want AI to be is more complicated than it appears at first glance.

                                                      • cryptonector 28 minutes ago

                                                        This. With good prompting you can get Opus 4.5 to do amazing things, but you have to know what you're doing -- it has to be the case that you could have implemented everything that Claude will do for you, and that what Claude is doing more than anything is a) go faster, b) be your well-read rubber ducky.

                                                      • soulofmischief 8 hours ago

                                                        I'm confident that they can. This isn't a new idea. Something like this would be a walk in the park for Opus 4.5 in the right harness.

                                                        Of course it likely still needs a skilled pair of eyes and a steady hand to keep it on track or keep things performant, but it's an iterative process. I've already built my own ASCII rendering engines in the past, and have recently built one with a coding model, and there was no friction.

                                                        • teiferer 7 hours ago

                                                          > skilled pair of eyes and a steady hand

                                                          But that's key here.

                                                          "A hammer and a chisel can build a 6ft wooden sculpture by themselves just fine .. as long as guided by a skilled pair of eyes and steady hands"

                                                          • soulofmischief 7 hours ago

                                                            Ok, but if you have a wooden hammer and chisel, and a steel hammer and chisel, choosing the wooden one is an artisanal choice, not a practical one. These tools enable an amount of velocity I've never had before, both in research and development.

                                                      • aghilmort 7 hours ago

                                                        really great! adjacent well-done ASCII using Braille blocks on X this week:

                                                        nolen: "unicode braille characters are 2x4 rectangles of dots that can be individually set. That's 8x the pixels you normally get in the terminal! anyway here's a proof of concept terminal SVG renderer using unicode braille", https://x.com/itseieio/status/2011101813647556902

                                                        ashfn: "@itseieio You can use 'persistence of vision' to individually address each of the 8 dots with their own color if you want, there's some messy code of an example here", https://x.com/ashfncom/status/2011135962970218736
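
                                                        The encoding itself is tiny: each braille character is U+2800 plus one bit per dot. A sketch, assuming a binary image as a boolean[][]:

                                                          // Bit for each dot at (row, column) in the 2x4 braille cell,
                                                          // per the Unicode braille pattern layout.
                                                          const DOT_BITS = [
                                                            [0x01, 0x08], // row 0: dots 1, 4
                                                            [0x02, 0x10], // row 1: dots 2, 5
                                                            [0x04, 0x20], // row 2: dots 3, 6
                                                            [0x40, 0x80], // row 3: dots 7, 8
                                                          ];

                                                          function toBraille(pixels: boolean[][]): string {
                                                            const lines: string[] = [];
                                                            for (let y = 0; y < pixels.length; y += 4) {
                                                              let line = "";
                                                              for (let x = 0; x < pixels[0].length; x += 2) {
                                                                let bits = 0;
                                                                for (let dy = 0; dy < 4; dy++) {
                                                                  for (let dx = 0; dx < 2; dx++) {
                                                                    if (pixels[y + dy]?.[x + dx]) bits |= DOT_BITS[dy][dx];
                                                                  }
                                                                }
                                                                line += String.fromCharCode(0x2800 + bits);
                                                              }
                                                              lines.push(line);
                                                            }
                                                            return lines.join("\n");
                                                          }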

                                                        • nomel 7 hours ago

                                                          It would be interesting to see how things changed if you included extended ascii characters [1], which were widely used for ascii UI.

                                                          [1] https://www.lookuptables.com/text/extended-ascii-table

                                                          • alexharri 6 hours ago

                                                            I did actually try out various alphabets (e.g. Cyrillic, Greek) and symbols (e.g. box drawing characters), but ended up removing them: https://github.com/alexharri/website/commit/d969ef839

                                                            Using only ASCII felt more in the "spirit" of the post and reduced scope (which is always good)

                                                          • chrisra 16 hours ago

                                                            > To increase the contrast of our sampling vector, we might raise each component of the vector to the power of some exponent.

                                                            How do you arrive at that? It's presented like it's a natural conclusion, but if I was trying to adjust contrast... I don't see the connection.

                                                            • c7b 15 hours ago

                                                              What about the explanation presented in the next paragraph?

                                                                > Consider how an exponent affects values between 0 and 1. Numbers close to 0 experience a strong pull towards 0 while larger numbers experience less pull. For example 0.1^2 = 0.01, a 90% reduction, while 0.9^2 = 0.81, only a reduction of 10%.

                                                                That's exactly the reason why it works; it's even nicely visualized below. If you've dealt with similar problems before you might know this in the back of your head. E.g. you may have had a problem where you wanted to measure distance from 0 but wanted to remove the sign. You may have tried absolute value and squaring, and noticed that the latter has the additional effect described above.

                                                              It's a bit like a math undergrad wondering about a proof 'I understand the argument, but how on earth do you come up with this?'. The answer is to keep doing similar problems and at some point you've developed an arsenal of tricks.

                                                              • finghin 15 hours ago

                                                                  In general, for analytic functions like e^x or x^n, the behaviour of the function on any open interval is enough to determine its behaviour elsewhere. By extension, in mathematics, examining values around the fundamental additive and multiplicative units {0, 1} is fruitful in illustrating the quintessential behaviour of the function.

                                                            • Izkata 10 hours ago

                                                              I dunno. Going to the last example at the bottom of the page and comparing the contrast slider all the way up versus all the way down, all these enhancements combined turn it into a blurry mush where it's harder to distinguish the shapes. It's the exact same problem I had with anti-aliasing fonts on older monitors (smaller resolutions), and why I always disabled it wherever I could.

                                                              • baud9600 7 hours ago

                                                                This is such a great article!

                                                                I found myself thinking, “I wonder if some of this could be used to playback video on old 8-bit machines?” But they’re so underpowered…

                                                              • nickdothutton 16 hours ago

                                                                What a great post. There is an element of ascii rendering in a pet project of mine and I’m definitely going to try and integrate this work. From great constraints comes great creativity.

                                                                • cjlm 7 hours ago

                                                                  Very impressive blogpost. No wonder it took 6 months. Makes me think I need to step up the game with my photo ASCII art compositor, printscii.com

                                                                  • eerikkivistik 14 hours ago

                                                                    It reminds me quite a bit of collision engines for 2D physics/games. Could probably find some additional clever optimisations for the lookup/overlap (better than kd-trees) if you dive into those. Not that it matters too much. Very cool.

                                                                    • Sesse__ 15 hours ago

                                                                      I did something very similar to this (searching for similar characters across the grid, including some fuzzy matching for nearby pixels) around 1996. I wonder if I still have the code? It was exceedingly slow, think minutes for a frame at the Pentiums of the time.

                                                                      • shiandow 15 hours ago

                                                                        I'm not sure if this exponent is actually enhancing contrast or just fixing the gamma.

                                                                        • Jyaif 17 hours ago

                                                                          It's important to note that the approach described focuses on giving fast results, not the best results.

                                                                          Simply trying every character, considering its entire bitmap, and keeping the character that minimizes the distance to the target gives better results, at the cost of more CPU.
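
                                                                          As a sketch of that brute-force version (assuming prerendered grayscale glyph bitmaps of the same size as the image tile):

                                                                            // Exhaustive search: keep the glyph whose bitmap has the
                                                                            // smallest sum of squared differences against the tile.
                                                                            function closestChar(
                                                                              tile: Float32Array, // grayscale tile, values in [0, 1]
                                                                              glyphs: Map<string, Float32Array>, // same-size glyph bitmaps
                                                                            ): string {
                                                                              let best = " ";
                                                                              let bestDist = Infinity;
                                                                              for (const [char, bitmap] of glyphs) {
                                                                                let dist = 0;
                                                                                for (let i = 0; i < tile.length; i++) {
                                                                                  const d = tile[i] - bitmap[i];
                                                                                  dist += d * d;
                                                                                }
                                                                                if (dist < bestDist) {
                                                                                  bestDist = dist;
                                                                                  best = char;
                                                                                }
                                                                              }
                                                                              return best;
                                                                            }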

                                                                          This is a well-known problem because early computers with monitors could only display characters.

                                                                          At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex: which new character do you create to reproduce an image optimally?

                                                                          And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.

                                                                          • alexharri 15 hours ago

                                                                            Yeah, this is good to point out. The primary constraint I was working around was "this needs to run at a smooth 60FPS on mobile devices" which limits the type and amount of work one can do on each frame.

                                                                            I'd probably arrive at a very different solution if coming at this from a "you've got infinite compute resources, maximize quality" angle.

                                                                            • brap 16 hours ago

                                                                              You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.

                                                                              For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.

                                                                              • Dylan16807 6 hours ago

                                                                                > limiting output to a small set of characters gives it a more uniform look which may be nicer

                                                                                And in the extreme that could totally change things. Maybe you want to reject ASCII and instead use the Unicode block that has every 2x3 and 2x4 braille pattern.

                                                                              • spuz 16 hours ago

                                                                              Thinking more about the "best results". Could this not be done by transforming the ASCII glyphs into bitmaps, and then using some kind of matrix multiplication or dot product calculation to find the ASCII character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing so I'm sure someone with more experience can clarify.
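
                                                                              Something like this, I imagine (a sketch of the idea, not from the post): flatten each glyph bitmap into a row of a matrix G and the tile into a vector t, and all similarity scores come from the single product G * t.

                                                                                // One row of the matmul per glyph; with rows and tile
                                                                                // L2-normalized, the argmax row is the closest glyph.
                                                                                function bestGlyphIndex(
                                                                                  G: Float32Array, // numGlyphs x tileSize, row-major
                                                                                  t: Float32Array, // flattened tile
                                                                                  numGlyphs: number,
                                                                                ): number {
                                                                                  const tileSize = t.length;
                                                                                  let best = 0;
                                                                                  let bestScore = -Infinity;
                                                                                  for (let g = 0; g < numGlyphs; g++) {
                                                                                    let score = 0;
                                                                                    for (let i = 0; i < tileSize; i++) {
                                                                                      score += G[g * tileSize + i] * t[i];
                                                                                    }
                                                                                    if (score > bestScore) {
                                                                                      bestScore = score;
                                                                                      best = g;
                                                                                    }
                                                                                  }
                                                                                  return best;
                                                                                }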

                                                                                • finghin 16 hours ago

                                                                                  In practice isn’t a large HashMap best for lookup, based on compile-time or static constants describing the character-space?

                                                                                  • spuz 16 hours ago

                                                                                    In the appendix, he talks about reducing the lookup space by quantising the sampled points to just 8 possible values. That allowed him to make a lookup table about 2MB in size, which was apparently incredibly fast.
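
                                                                                    Roughly like this, I'd guess (the sample count below is an assumption; 8^7 = 2,097,152 one-byte entries is about 2MB, which matches):

                                                                                      // Quantize each sampled brightness to 8 levels (3 bits) and
                                                                                      // combine them into a direct index into a precomputed table.
                                                                                      const LEVELS = 8;
                                                                                      const NUM_SAMPLES = 7; // assumed sample count

                                                                                      function lutIndex(samples: number[]): number {
                                                                                        // samples are brightness values in [0, 1]
                                                                                        let index = 0;
                                                                                        for (const s of samples) {
                                                                                          const q = Math.min(LEVELS - 1, Math.floor(s * LEVELS));
                                                                                          index = index * LEVELS + q;
                                                                                        }
                                                                                        return index;
                                                                                      }

                                                                                      // table: Uint8Array of length LEVELS ** NUM_SAMPLES, filled
                                                                                      // offline by running the full search per quantized combination.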

                                                                                    • finghin 16 hours ago

                                                                                      I've been working on something similar (I didn't get to this stage yet) and was planning to do something very much like the circle-sampling method, but the staggering of circles is a really clever idea I had never considered. I was planning on sampling character pixels' alignment along orthogonal and diagonal axes. You could probably combine these approaches. But yeah, such an approach seemed particularly powerful because you could encode it all in a table.

                                                                                  • Sharlin 16 hours ago

                                                                                    And a (the?) solution is using an algorithm like k-means clustering to find the tileset of size k that can represent a given image the most faithfully. Of course that’s only for a single frame at a time.

                                                                                  • maxglute 13 hours ago

                                                                                    Mesmerizing, the i, ! shading is unreasonably effective.

                                                                                    • pcj-github 7 hours ago

                                                                                      Nice work! ASCII rendering will never be the same, in a good way.

                                                                                      • _blk 2 hours ago

                                                                                        Wow. Pretty cool. Now just replace the characters with the set from the Matrix and swallow the blue pill.

                                                                                        • fragmede an hour ago

                                                                                          very cool. I may have to look a bit closer at the pipeline I used to create the art at ssh funky.nondeterministic.computer. The graphics could always be improved, however I will note that it needs color for best effect.

                                                                                          • estimator7292 12 hours ago

                                                                                            Those 3D interactive animations are the smoothest 3D rendering I've ever seen in a mobile browser. I'm impressed

                                                                                            • nathaah3 17 hours ago

                                                                                              that was so brilliant! i loved it! thanks for putting it out :)

                                                                                              • mark-r 12 hours ago

                                                                                                This is something I've wanted to do for 50 years, but never found the time or motivation. Well done!

                                                                                                • jurf 8 hours ago

                                                                                                  This is at the same time super cool and really disappointing, as I've been carrying around this idea in my head for maybe ten years as a cool side project and never got around to implementing it.

                                                                                                  However, there might still be room for competition, heh. I always wanted to do this on the _entirety_ of Unicode to try getting the most possible resolution out of the image.

                                                                                                  • adam_patarino 16 hours ago

                                                                                                    Tell me someone has turned this into a library we can use

                                                                                                    • alexharri 15 hours ago

                                                                                                      Author here. There isn't a library around this yet, but the source code for the blog is open source (MIT licensed): https://github.com/alexharri/website

                                                                                                      The code for this post is all in PR #15 if you want to take a look.

                                                                                                      • nathell 16 hours ago

                                                                                                        Well there's aalib and libcaca, but I'm not sure about their fidelity compared to this.

                                                                                                        • minimaxir 11 hours ago

                                                                                                          I was investigating a fun webcam-to-ASCII project so now I am tempted to take an approach at porting the logic from the blog post into something reusable.

                                                                                                          • minimaxir 5 hours ago

                                                                                                            Update: I tested a port of the OP's methodology using Claude Code/Claude Opus 4.5 with some specific performance optimizations, and per the benchmarks, converting a 1024x1024 image to ASCII takes 16 microseconds. I suspect that will decrease after some more polish/iteration but that's enough for potentially real-time generation even on mobile hardware.

                                                                                                            • BobbyTables2 an hour ago

                                                                                                              That doesn’t seem right.

                                                                                                              Surely you mean 16 milliseconds ?

                                                                                                              • minimaxir an hour ago

                                                                                                                Benchmark says 15.654 µs. Rendering the text as a 1024x1024 image is 2.8737 ms.

                                                                                                                However, the ASCII output quality is nondiverse despite using the same technique, so will need to do significantly more testing and this likely won't be released soon.

                                                                                                          • guerby 16 hours ago

                                                                                              Don't know what algorithms are used by the famous libcaca:

                                                                                                            https://github.com/cacalabs/libcaca

                                                                                                          • chikna 7 hours ago

                                                                                                            This is fantastic

                                                                                                            • octoberfranklin 10 hours ago

                                                                                                              Application error: a client-side exception has occurred (see the browser console for more information).

                                                                                                              • AI-love 8 hours ago

                                                                                                  Holy moly, I love this kind of work!

                                                                                                                • steve1977 14 hours ago

                                                                                                                  Thanks! This article put a genuine smile on my face, I can still discover some interesting stuff on the Internet beyond AI slop.

                                                                                                                  • zdimension 15 hours ago

                                                                                                                    Well-written post. Very interesting, especially the interactive widgets.

                                                                                                                    • nurettin 11 hours ago

                                                                                                                      I love that they don't just work on the edges and declare their work complete. No, shadows also have to be perfect!

                                                                                                                      Reminds me of this underrated library which uses braille alphabet to draw lines. Behold:

                                                                                                                      https://github.com/tammoippen/plotille

                                                                                                                      It's a really nice plotting tool for the terminal. For me it increases the utility of LLMs.

                                                                                                                      • blauditore 15 hours ago

                                                                                                                        Nice! Now add colors and we can finally play Doom on the command line.

                                                                                                                        More seriously, using colors (not trivial probably, as it adds another dimension), and some select Unicode characters, this could produce really fancy renderings in consoles!

                                                                                                                        • krallja 13 hours ago

                                                                                                                          "finally"? We were playing Quake II in AAlib in 2006. https://www.jfedor.org/aaquake2/

                                                                                                                          • jrmg 15 hours ago

                                                                                                                            At least six dimensions, right? For each character, color of background, color of foreground, and each color has at least three components. And choosing how the components are represented isn’t trivial either - RGB probably isn’t a good choice. YCoCg?

                                                                                                                          • lysace 12 hours ago

                                                                                                                            Seems like stellar work. Kudos.

                                                                                                              I am, however, struck by the (from an outsider's POV) highly niche-specific terminology used in the title.

                                                                                                                            "ASCII rendering".

                                                                                                                            Yes, I know what ASCII is. I understand text rendering in sometimes painful detail. This was something else.

                                                                                                                            Yes, it's a niche and niches have their own terminologies that may or may not make sense in a broader context.

                                                                                                                            HN guidelines says "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize."

                                                                                                                            I'm not sure what is the best course of action here - perhaps nothing. I keep bumping into this issue all the time at HN, though. Basically the titles very often don't include the context/niche.

                                                                                                                            • chrisra 16 hours ago

                                                                                                                              Next up: proportional fonts and font weights?

                                                                                                                              • finghin 15 hours ago

                                                                                                                                I had been thinking of messing around with a DOM-based ‘console’ in Tauri that could handle a lot more font manipulation for a pseudo-TUI application similar to this. It's definitely possible! It would be even simpler to do in TS.

                                                                                                                              • monitron 9 hours ago

                                                                                                                                > The image of Saturn was generated with ChatGPT.

                                                                                                                                Wait...wh...why?!? Of all the things, actual pictures of the planet Saturn are readily available in the public domain. Why poison the internet with fake images of it?

                                                                                                                                • dang an hour ago

                                                                                                                                  "Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."

                                                                                                                                  "Eschew flamebait. Avoid generic tangents."

                                                                                                                                  https://news.ycombinator.com/newsguidelines.html

                                                                                                                                  • pjc50 7 hours ago
                                                                                                                                    • taneq 4 hours ago

                                                                                                                                      How can planets be real if our eyes aren’t real?

                                                                                                                                    • userbinator 7 hours ago

                                                                                                                                      More like, why have it regurgitate something likely to have been in its training data?

                                                                                                                                      • echelon 7 hours ago

                                                                                                                                        > > The image of Saturn was generated with ChatGPT.

                                                                                                                                        > Wait...wh...why?!?

                                                                                                                                        It has just begun. Wait until nobody bothers using Wikipedia, websites, or even one day forums.

                                                                                                                                        This is going to eat everything.

                                                                                                                                        And when it's immediate to say something like, "I need a high contrast image of Saturn of dimensions X by Y, focus on Saturn, oblique angle" -- that's going to be magic.

                                                                                                                                        We'll look at the internet and Google like we look at going to the library and grabbing an encyclopedia off the shelves.

                                                                                                                        The use of calculators didn't kill ingenuity, nor did the switch to the internet, despite teachers protesting both.

                                                                                                                                        Humans will always use the lowest friction thing, and we will never stop reaching for the stars.

                                                                                                                                        • taneq 4 hours ago

                                                                                                                                          I’ve been having The Talk with my kids recently. They’ll say “I looked up this question and the answer was X.” And I’ll ask “was that answer on a credible website, or was it an AI summary?” And then explain, again, that LLMs are great at producing plausible sounding explanations for things, but that you have to ground-truth anything that they tell you if it’s important that it’s correct.

                                                                                                                                          • leptons 2 hours ago

                                                                                                                            Some countries are banning social media for teenagers, but they really should be banning "AI" for all teenagers. Most adults can't even be trusted with asking an "AI" about anything, so children are going to have a very warped world view the more they interact with "AI". The tech really is not ready for prime time.

                                                                                                                                          • awesome_dude 5 hours ago

                                                                                                                                            I, for one, have been hoping that AI slop would cause people to be a LOT more cynical about the information they get (from the internet in particular, but from any source in general)

                                                                                                                                            But it's not happened yet

                                                                                                                                            • echelon 4 hours ago

                                                                                                                              What statistical measures of "people" are you using to measure this? How can you be sure nothing has changed?

                                                                                                                                              Anecdotally, I'm seeing a lot of "it looks like AI" comments on photos and videos now. That's the new "is it Photoshop?"

                                                                                                                                              I'd hold off on judgment until we get population studies on this.

                                                                                                                                              • awesome_dude 4 hours ago

                                                                                                                                What statistical measures I use is of what importance to you?

                                                                                                                                                I haven't presented a measurement, just an expectation.

                                                                                                                                                • fc417fc802 2 hours ago

                                                                                                                                                  You said it hasn't happened yet and he's asking how you arrived at that conclusion.

                                                                                                                                        • jwr 11 hours ago

                                                                                                                                          Hmm. This renderer is impressive. Will it be available for toy projects? (such as an online page with JavaScript for converting family pictures)