Maybe it's just me, and I'm more than okay with that, but I went down a whole license rabbit hole trying to figure out the license this uses.
It uses this "personal use zlib license". Earlier it was actually licensed under the plain zlib license, which I think of as something similar to the MIT license (I think, I am not a lawyer).
My issue is that the "personal use zlib license" feels like it was made up by the author, and that you need to contact the author for a commercial license?
At this point, he should've just used something like dual licensing with AGPL + a commercial license.
Honestly, I get it. I also wish there were an OSI-compliant license that made open source make sense for developers, since open source is a really weak link in this economy, but such licenses basically make your project source-available only.
I have no problem with that, and honestly I just wanted this to be discussed here. I had a blast looking at all the licenses on Wikipedia and the opensource.com website. The Artistic License seems really cool if you want to relicense or something; I am looking more into it. I genuinely wish something like the SSPL could've been considered open source, as it doesn't impact 90% of users, only the likes of AWS/big tech.
Yes, ad-hoc licenses are a big red flag. What counts as "personal use"? Does that mean "non-commercial", or that you can only use it for yourself and cannot distribute software made with it? If someone distributes free open source software, would that count as personal use? Or would the end-users of that software be restricted from using it for whatever counts as non-personal-use due to this dependency?
Sorry to be blunt, but that C/C++ code is crustyyy. Just to start with, run it through a decently complete clang-tidy profile and fix all errors and warnings. That should become part of your build; the longer you hold out on that, the harder it will be. Your code has no provisions for handling fuzzed/corrupted content. Maybe a better idea would be to switch this entire code base over to Rust.
Consider modern C++ practices as outlined here: https://github.com/cpp-best-practices/cppbestpractices/blob/...
Congratulations!
Can you say how it's similar to and different from superficially similar-sounding work?
(1) https://github.com/linebender/vello , dual Apache/MIT, by Raph Levien et al
(2) https://sluglibrary.com/ , proprietary, by Eric Lengyel (Terathon)
Slug is primarily designed for text rendering.
Vello is general purpose, like Rasterizer, but is based on GPU compute. Rasterizer uses the 'traditional' GPU pipeline. Performance numbers for both look very competitive, although Vello seems to have issues with GPU lock up at certain zoom scales. Rasterizer has been heavily tested with huge scenes at any scale.
Thank you very much for that concise explanation.
Stupid question: Why not use the hardware (fixed function) rasterizer inside the GPU? I guess that one is only optimized for triangles.
The longer answer is that using straight-edged geometry to represent curves is a resolution-dependent operation, e.g. a full screen circle may need to be flattened to an 80-sided polygon.
Rasterizer can solve quadratic curves in the fragment shaders, which massively reduces the geometry needed for a scene.
Also, the native rasterizer only supports MSAA, which is inferior to reference analytic area AA.
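To put a rough number on the flattening cost: the chord error (sagitta) of approximating an arc of angle theta on a circle of radius r is r * (1 - cos(theta / 2)), so the segment count for a given pixel tolerance falls out directly. A minimal sketch (the function name and the quarter-pixel tolerance are my own, not from Rasterizer); with r = 400 px it lands in the same ballpark as the "80-sided polygon" mentioned above:

```cpp
#include <cmath>

// How many line segments are needed to flatten a circle of radius r
// (in pixels) so each chord deviates by at most tol pixels? The chord
// error (sagitta) over an arc of angle theta is r * (1 - cos(theta / 2)),
// so theta = 2 * acos(1 - tol / r) and the count is 2 * pi / theta.
int segmentsForCircle(double r, double tol) {
    const double pi = std::acos(-1.0);
    const double theta = 2.0 * std::acos(1.0 - tol / r);
    return (int)std::ceil(2.0 * pi / theta);
}
```

Note the segment count scales with sqrt(r), which is exactly why flattening is resolution-dependent: zoom in 4x and you need roughly 2x the segments.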
Good question and answer ;-)
I am still endlessly fascinated by how modern GPUs can handle stuff like UE5 and ray-traced Lumen (whatever) but still struggle to do 2D rendering. I mean, I get it logically, but it just feels so disconnected.
Always neat to see this kind of stuff, however. Very cool.
The core problem is path winding: https://en.wikipedia.org/wiki/Winding_number
Paths can be any size, and the problem is hard to parallelize. GPUs like stuff broken into small regular chunks.
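For anyone unfamiliar, the non-zero winding rule for a flattened (line-segment) path can be sketched in a few lines; computing it for one pixel is trivial, and the hard part is doing it efficiently for huge paths over millions of pixels. This is the generic textbook version, not Rasterizer's algorithm, and curves would be flattened or solved analytically first:

```cpp
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// > 0 if p is left of the directed line a->b, < 0 if right.
static double isLeft(Pt a, Pt b, Pt p) {
    return (b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y);
}

// Non-zero winding number of point p with respect to a closed polygon:
// count signed crossings of the edges against a horizontal ray from p.
int winding(const std::vector<Pt>& poly, Pt p) {
    int w = 0;
    const std::size_t n = poly.size();
    for (std::size_t i = 0; i < n; ++i) {
        Pt a = poly[i], b = poly[(i + 1) % n];
        if (a.y <= p.y) {
            if (b.y > p.y && isLeft(a, b, p) > 0) ++w;   // upward crossing
        } else {
            if (b.y <= p.y && isLeft(a, b, p) < 0) --w;  // downward crossing
        }
    }
    return w;  // non-zero => p is filled under the non-zero rule
}
```

The parallelization pain is visible even here: every edge of the path can affect every pixel's winding, which is the opposite of the small regular chunks GPUs like.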
TestFiles/Manchester_Union_Democrat_office_1877.svg is composed of a single huge path, which was a great torture test.
I've been working on this problem on and off for over 10 years.
AMA ;-)
Nice work! I’d love to hear about your goals with this project, do you want to get it into PDF or browser engines, or anything like that?
Mark Kilgard surveyed some path rendering engines with a few curves that most have trouble with. It’d be fun to see how Rasterizer stacks up, and perhaps a nice bragging point if you dominate the competition. https://arxiv.org/pdf/2007.12254
Having used the quadratic solver code that you found on ShaderToy (sqBezier), you might be able to shave some cycles by doing a constant-folding pass on the code. It can be simplified from the state it's in. Also, the constant in there, 1.73205, is just sqrt(3), and IMO it's nicer to see the sqrt(3) than the raw constant; it won't slow anything down, since the compiler will compute the constant for you. It might also be nice to link to the original source of that code on pouet.
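To illustrate the point about the constant (C++ here rather than GLSL, and the name is mine):

```cpp
#include <cmath>

// 1.73205... in the sqBezier code is just sqrt(3); naming it documents
// intent, and an optimizing compiler folds the call to a constant anyway.
const float kSqrt3 = std::sqrt(3.0f);
```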
Are there any culling optimizations for unseen elements when layering SVG images? Looks like this isn’t an optimization that comes out-of-the-box with OpenVG and all the major web browsers needed to add this, so wondering what your solution is doing.
The depth buffer is used for opaque path interiors.
This project made me happy. You mentioned Flash as an inspiration. Have you looked at how Ruffle is handling vector art?
From what I recall they are converting it to triangles. Your solution (curves in the shaders?) seems both cheaper and more accurate, so I'm wondering if they could use it!
Computer graphics is not my field, so forgive my ignorance, but could you explain the pros and cons of your "traditional graphics pipeline" approach versus more "modern" approaches (I'm guessing that means doing most of the work in shaders)? At the moment, and looking towards the future, do you think hardware (mobile and mainstream desktop) will continue to support your approach just as well, or are shader-based approaches (I guess using multi-channel SDFs) likely to gain the upper hand at some point?
The "traditional graphics pipeline" approach was chosen to maximise the platforms Rasterizer could run on, as GPU compute support at the time was patchy. Compute is now more universal, so Rasterizer could move that way.
SDFs are expensive to calculate, and have too many limitations to be practical for a general-purpose vector engine.
You mentioned winding numbers in a couple of places
Wouldn't it make sense to do a "first pass" and eliminate paths that intersect themselves? (by splitting them into 2+ paths)
I never understood why these are supported in the SVG spec.
It seems like a pathological case. Once self-intersecting paths are eliminated the problem gets simpler.. no?
Or would a CPU pass be cheating?
The Rasterizer algorithm handles self-intersecting paths without issue. Removing them requires expensive and complex computational geometry.
In a figure-8 path where the intersection is in the center of a pixel, does Rasterizer set that pixel to 0.5 or to 0?
Have you compared performance of your solution to rive?
No. I'm hoping someone else will do that ;-)
How would you break down the problem with rendering 2D to someone who has no rendering background? Is there a single large issue or is it a complex multi-issue endeavour?
Winding numbers are easy to explain, but hard to compute efficiently: https://en.wikipedia.org/wiki/Winding_number
Why did you release it only for personal use? Any chance for an OSI-approved license?
So what part of the SVG spec does this implement? Does it support text? Gradients? Filters like blur and drop shadow?
None of that yet. The underlying SVG library, nanosvg, is very simple with no text support.
The first priority was to solve paths to pixels efficiently, including text (50,000 glyphs @ 60fps).
Gradients will be added when time allows, as I have code from a previous engine.
The coverage algorithm can be extended to support cheap box blurs, which could be used for drop shadows.
Ah ok, cool! So what path do you have in mind for getting this used in applications? Are you hoping to implement a decent part of the spec in the coming years? Do you have a rough timeline?
Rasterizer excels at animation and complex scenes, e.g. 2D CAD documents. The original inspiration was Flash, as I love innovative design tools. Flash 1.0 could easily be used by designers, but it ultimately lost its way and became a coder's toy after the Adobe acquisition and ActionScript 2.0.
Fleshing out the spec is planned, but I cannot provide a timeline as this has all been done at my own considerable expense. Maybe if my tips grow: https://paypal.me/mindbrix
What magic makes the zooming work?
No magic, just 10+ years of experimentation and optimisation.
I'm always interested in new 2D vector rendering algorithms, so if you make a blog post explaining your approach, with enough detail, I'd be happy to read it!
Thanks for the feedback. I will work on one.
I recall that many years ago Nvidia released an OpenGL extension for rasterizing vectors/SVG-type stuff. A bit of googling found this from 2011:
https://developer.download.nvidia.com/assets/gamedev/files/N...
and this:
https://developer.nvidia.com/nv-path-rendering-videos
FAQ points 30 & 31 say it uses multisampling (up to 32 samples per pixel) for AA, and the winding at each sample is calculated analytically from the curve.
From other searching, it seems no other vendor supports that extension.
My original target was iOS, so this was unavailable. I did experiment with their algorithm, but 4x MSAA was slow and had poor quality compared to Core Graphics, which was my reference.
Nice work!
Without having looked at your particular shader code, I can only imagine the horrors and countless hours that went into writing and debugging it...
Which OpenGL and GLSL versions are you targeting?
I've been thinking about prototyping an SVG renderer integrated into my game engine that would rasterize textures from .svg files on content load. It would offer benefits like improved packing, better scaling, and resolution independence. A GPU-based solution would let you skip the whole "rasterize on CPU then upload" dance and just rasterize directly into a texture render target in some FBO, then use the texture later on. That being said, a CPU-based solution is definitely easier and more bulletproof ;-)
Thanks, you imagine correctly ;-)
The current version uses Metal. I haven't even considered GPU ports yet, as my methodology is to get it working well on one platform first.
For single-pass SVG --> texture, a CPU approach would probably offer the lowest latency. For repeat passes, the GPU would probably win.
I find that caching of renders brings orders of magnitude higher performance when using the native iOS / macOS canvas, however I get the feeling that there is really a limit to the direct drawing performance possible with a canvas API. I haven’t tried this yet so I’d be happy to be surprised.
Drawing cached renders in quads is how Core Animation works, but they do not scale well. You are ultimately limited by the CPU. Rasterizer enables a fully-animated canvas @ 60fps.
Great to see you're still going Nigel! Is this library the latest iteration of your original VectorGL project?
Hi Steve. Yes, it is! 10+ years in the making.
Wow, that is an amazing piece of work, more so because it is so small. You should probably be more explicit about commercial licensing so companies that want to use it know where and how to contact you.
My email address is in every file ;-)
Nice :)
If this was to come to Linux, what would you recommend: OpenGL, Vulkan, or something else?
Thanks. I couldn't really say without doing proper research.
This looks juicy! :)
@mindbrix does it blend colors in linear space/are colors linearized internally?
Once you do color-space-correct color mixing, you realize that people actually expect the wrong result, i.e. the result you get from mixing in sRGB, and then you have to make the rendering output visually more "correct" by making it incorrect. One of the cases where the correct computation is the wrong answer.
Heck, that's what people expect with CSS for example.
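For the curious, the difference is easy to demonstrate: a 50/50 mix of black and white done on the encoded sRGB values gives 0.5, while mixing in linear light and re-encoding gives roughly 0.735, which looks noticeably lighter. A sketch using the standard IEC 61966-2-1 piecewise transfer functions (function names are mine):

```cpp
#include <cmath>

// sRGB <-> linear transfer functions (IEC 61966-2-1 piecewise curve).
float srgbToLinear(float s) {
    return s <= 0.04045f ? s / 12.92f
                         : std::pow((s + 0.055f) / 1.055f, 2.4f);
}
float linearToSrgb(float l) {
    return l <= 0.0031308f ? l * 12.92f
                           : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}

// 50/50 mix of two sRGB-encoded values, done correctly in linear light.
float mixLinear(float a, float b) {
    return linearToSrgb(0.5f * (srgbToLinear(a) + srgbToLinear(b)));
}

// The same mix done directly on the encoded values, as CSS and most
// apps do -- "wrong", but what people expect.
float mixEncoded(float a, float b) { return 0.5f * (a + b); }
```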
My reference renderer has been Core Graphics. If it looks like CG I assume it's OK - and if it's not OK I'm in good company ;-)
You can switch between CG and Rasterizer in the demo app using the 0 key to see the difference for yourself.
Linear at present.
Interesting, how does this compare to the Rive renderer?
I can't really answer that in detail yet. I suspect Rasterizer will be faster for complex scenes.
Thanks for the answer. You could make it render a rive asset for one-on-one comparison. ;)