I'm a painter, so most of the technical details are lost on me, though I get the gist.
Looking at the results, it seems that the author defines a base color (green for the leaves) and moves up and down in brightness using what amounts to a multiplication for the shadows and an add for the lights. The result feels a bit harsh and unnatural.
I would suggest that he also play a bit with the hue values:
1. Darken the lights, but move the green towards yellow.
2. Lighten the darks, but move the green towards blue.
As a programmer who dabbles in painting: what you are describing is indirect lighting and subsurface scattering (leaves are somewhat translucent, so lit from behind they look yellowish; and the sky is blue, so shadows pick up some of that blue).
In a full rendering pipeline (rather than the simplified one used just for this effect) both would already be implemented, but it's a good thing to keep in mind.
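For the curious, the tweak the painter describes could look roughly like this. This is a minimal TypeScript sketch, not the article's actual shader code; the tint colors and mix amounts are invented values:

```typescript
// Minimal sketch of hue-shifted shading: instead of only scaling
// brightness, drift the hue warm in the lights and cool in the shadows.
// All constants here are invented, not taken from the article.

type RGB = [number, number, number]; // components in 0..1

const mix = (a: RGB, b: RGB, t: number): RGB => [
  a[0] + (b[0] - a[0]) * t,
  a[1] + (b[1] - a[1]) * t,
  a[2] + (b[2] - a[2]) * t,
];

const WARM: RGB = [1.0, 0.9, 0.3]; // yellowish tint for the lights
const COOL: RGB = [0.2, 0.3, 0.8]; // bluish tint for the shadows

// `light` is a shading factor in 0..1, where 0.5 is the unlit base color.
function shadeLeaf(base: RGB, light: number): RGB {
  if (light > 0.5) {
    const t = (light - 0.5) * 2; // 0..1 into the lights
    const tinted = mix(base, WARM, 0.3 * t); // hue drifts toward yellow
    // Brighten less than a plain add would; the hue shift does the rest.
    return tinted.map((c) => Math.min(1, c + 0.15 * t)) as RGB;
  }
  const t = (0.5 - light) * 2; // 0..1 into the shadows
  const tinted = mix(base, COOL, 0.3 * t); // hue drifts toward blue
  // Darken less than a plain multiply would.
  return tinted.map((c) => c * (1 - 0.35 * t)) as RGB;
}
```

In an actual shader the same idea is just two mix() calls layered on whatever lighting term the renderer already computes.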
To be fair to the (epic) work of the author, a spacecraft likely wouldn't have its darks shifted toward blue. I reckon balancing the colors was within their means, and this is likely a stylistic choice.
A relevant and highly appealing shader is the 'Van Gogh painting' effect applied to the framebuffer in one quest of the Hearts of Stone DLC for The Witcher 3. Here is a great reverse-engineering deep-dive: https://astralcode.blogspot.com/2020/11/reverse-engineering-...
About 15 years ago I was working at Te Papa, the national museum here in NZ, and we had an exhibition on 'Monet and the Impressionists'.
I created a Flash app where you could upload a photo and then produce an impressionist-style painting from it. You could either auto-generate the whole thing or use your mouse to paint sections of the photo.
I only have the one screenshot of it now: https://imgur.com/a/5g40UEr
If I recall correctly, I'd take the position of the mouse, plus a rough direction it was traveling in, and then apply a few random brush strokes. Each stroke had a gradient applied from its start to its end, sampled from the underlying photo, and a stroke's length would be limited if any edges were detected in the photo near its starting point.
In the end it was only a couple of hundred lines of ActionScript but it all came together to achieve quite a neat effect.
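Reconstructing from memory, the core loop would look something like this in modern TypeScript/Canvas terms rather than ActionScript. `sampleColor` and `edgeStrength` are stand-ins for reading the photo's pixels and an edge map, and all the constants are guesses, not the original values:

```typescript
// Hypothetical reconstruction of the stroke logic described above.

interface Point { x: number; y: number; }

declare function sampleColor(p: Point): string;  // CSS color read from the photo
declare function edgeStrength(p: Point): number; // 0..1 from an edge detector

function paintStrokes(ctx: CanvasRenderingContext2D, mouse: Point, angle: number) {
  for (let i = 0; i < 4; i++) {
    // Jitter the start position and direction a little per stroke.
    const start: Point = {
      x: mouse.x + (Math.random() - 0.5) * 10,
      y: mouse.y + (Math.random() - 0.5) * 10,
    };
    const a = angle + (Math.random() - 0.5) * 0.5;

    // Shorten the stroke when it starts near a detected edge.
    const len = 5 + 30 * (1 - edgeStrength(start));
    const end: Point = {
      x: start.x + Math.cos(a) * len,
      y: start.y + Math.sin(a) * len,
    };

    // Gradient from the photo's color at the start to its color at the end.
    const grad = ctx.createLinearGradient(start.x, start.y, end.x, end.y);
    grad.addColorStop(0, sampleColor(start));
    grad.addColorStop(1, sampleColor(end));

    ctx.strokeStyle = grad;
    ctx.lineWidth = 4 + Math.random() * 4;
    ctx.lineCap = "round";
    ctx.beginPath();
    ctx.moveTo(start.x, start.y);
    ctx.lineTo(end.x, end.y);
    ctx.stroke();
  }
}
```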
I love this guy's blog; I just wish there were a way to follow him that wasn't Twitter.
I'd love it if they had a Mastodon account.
He's on Bluesky: https://bsky.app/profile/maxime.bsky.social
Good catch. Unfortunately my RSS app isn't displaying the feed. First time I've ever encountered this. I've reached out to the Unread app's developer to see if there's a fix.
Looks like that feed hasn't been updated in (almost exactly) a year. The four most recent blog posts are missing.
Author here
Fixed the RSS feed (and the sitemap as well, since both were out of date). Those scripts had been silently skipped since January.
Apologies for the inconvenience, it should be updated now!
LOL. I had Unread's developer communicating with me via email. He'd been trying to help me troubleshoot for a day when, all of a sudden, the articles magically appeared. Haha. I mentioned to him that maybe you saw my comment, before I saw your message here. So funny. Anyways, thank you, as I really wanted to follow your blog. Much appreciated.
Yeah, I just had to open Twitter to be able to give him feedback, which is annoying because it's an otherwise dead platform to me.
I asked him whether anyone had suggested experimenting with the Symmetric Nearest Neighbor filter[0], which is similar to Kuwahara but works slightly differently. With a little tweaking it can actually embed an unsharp-mask effect into itself, which may eliminate the need for a separate Sobel pass.
I did some JavaScript-based experiments with SNN here:
https://observablehq.com/@jobleonard/symmetric-nearest-neigh...
... and I have no shader experience myself, so I have no idea how it compares to Kuwahara in terms of performance cost (probably worse, though).
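For anyone who doesn't want to click through: the core selection step of SNN boils down to something like this. A grayscale TypeScript sketch with clamped borders, simplified from what a real implementation would do:

```typescript
// Symmetric Nearest Neighbour on a grayscale image stored as a flat
// Float32Array: for each symmetric pair of neighbours, keep the one
// whose value is closer to the centre pixel, then average the picks.

function snnFilter(src: Float32Array, w: number, h: number, r: number): Float32Array {
  const dst = new Float32Array(src.length);
  // Clamp coordinates to the image borders.
  const at = (x: number, y: number) =>
    src[Math.min(h - 1, Math.max(0, y)) * w + Math.min(w - 1, Math.max(0, x))];

  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const center = at(x, y);
      let sum = center;
      let count = 1;
      for (let dy = -r; dy <= r; dy++) {
        for (let dx = -r; dx <= r; dx++) {
          // Visit each symmetric pair (dx,dy)/(-dx,-dy) exactly once.
          if (dy < 0 || (dy === 0 && dx < 0)) continue;
          if (dx === 0 && dy === 0) continue;
          const a = at(x + dx, y + dy);
          const b = at(x - dx, y - dy);
          sum += Math.abs(a - center) <= Math.abs(b - center) ? a : b;
          count++;
        }
      }
      dst[y * w + x] = sum / count;
    }
  }
  return dst;
}
```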
This is excellent work and I adore the result, but I can't help but feel like the effect could have been achieved with hand-crafted texturing. If you look at Sketchfab as often as I do, you'll see artists achieving this kind of thing purely with textured polygons[0]. Dynamic lighting is lost with that method, but I feel like an artist and a programmer in close collaboration could find a clever way to solve that.
[0] https://sketchfab.com/3d-models/watercolor-bird-b2c20729dd4a...
Hand-crafted textures are fine, but so is spending a chunk of time roughly equivalent to hand-painting a few models and then being able to render any arbitrary geometry in a painterly fashion. You could even figure out a way to combine them, so that the computer half-asses the parts a master painter would let apprentices handle, and an artist then goes back in and does the important parts by hand. I work in Illustrator, and figuring out how to make it do half-assed, highly stylized lighting on arbitrary shapes I draw, then doing the rest myself, has been a lot of fun.
Hmm. So the point of looking "painterly" is that it looks like the art was done manually by a craftsman. But once any computer can do it, that stops working. Then you'll need to do something else.
I guess this also applies to diffusion models. Their output looks just as if a human made it! But as that gets more common, there will be added value in doing whatever they can't do, because only that will look truly human.
> So the point of looking "painterly" is that it looks like the art was done manually by a craftsman.
I don't know that there has to be a point to it in that sense. In general, the goal isn't to generate static images that can be passed off as paintings. It's a real-time 3D rendering technique, for one thing -- you can't paint this.
The term painterly refers to non-photorealistic computer-graphics styles that mimic some aspects of actual paintings. The point of doing this is to (try to) make a thing (a 3D scene, generally) look the way you want it to.
> So the point of looking "painterly" is that it looks like the art was done manually by a craftsman
I think the core issue is the difference between the digital and the analog as mediums. My understanding is that the computer is not a medium in the real sense. Rather, (for the most part) it is a meta-medium: a medium that impersonates other mediums. It impersonates the photographer's darkroom, a typewriter, a movie editing station, etc.
TFA describes a more effective way to impersonate. It is no more a replacement for a painting than a photograph of my wife is a replacement for my wife.
> But as that gets more common, there will be added value in doing whatever they can't do, because only that will look truly human.
Here I would agree. Standing in front of a real painting is an entirely different experience to looking at a digital emulation of a painting on a screen. My new problem as an art teacher is that I find myself talking to students who claim to have seen the work of (for example) Michelangelo, yet I know they have never visited Italy.
Impressive shaders, blog, website, and work. Really nice to see thought and details going into what looks like every aspect of their domain.
It's been a long time since I checked what one can do with shaders, and this is truly impressive!
Great blog post, very inspiring.