A couple of suggestions.
A clearer mechanism to indicate when a dragged component will fit into a cell of another node. Quite often when you drag, the highlighted drop area is obscured by the node you are dragging.
pow(a,b), abs(a), max(a,b), and min(a,b) nodes would add a lot of functionality.
Then you could do something like:
(((1 / abs((((0 - (y-0.5)) * 6) - ((sin(((((x-0.5) * 6) + time) * ((cos(time) / 3) + 2))) / 4) + pow(((x-0.5) * 2), 3))))) / 9) + (0 - pow((1 - min(abs((x-0.5), abs((y-0.5))), pow(9, 9))))
Which should generate a field with axis lines in negative numbers and a plot of x^3 modulated with a slight wobbly sine wave as positive numbers, as seen at https://c50.fingswotidun.com/show/?code=10v-6*u6*t%2Btc3%2F2...
(note - in my thing u and v are shorthand values for 0.5-x and 0.5-y respectively)
Perlin noise can add a lot too, but it's not as trivial to add. I have a compact Perlin generator in the first few lines of https://c50.fingswotidun.com/fastStackie.js which uses hashing to generate indexed random numbers, rather than the lookup table that sometimes gets used for Perlin. Feel free to use the code if you understand any of it.
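In case it's useful, here's a minimal sketch (in TypeScript, not the actual fastStackie.js code) of the hashing idea: an integer hash stands in for the classic permutation table. It's value noise rather than true gradient/Perlin noise, but the indexed-random-number trick is the same.

```typescript
// Cheap integer hash -> pseudo-random number in [0, 1) for a lattice point.
function hash2(x: number, y: number): number {
  let h = (x * 374761393 + y * 668265263) | 0; // mix with large primes
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  h ^= h >>> 16;
  return (h >>> 0) / 4294967296; // unsigned 32-bit -> [0, 1)
}

// Smoothstep-style fade and linear interpolation helpers.
const fade = (t: number) => t * t * (3 - 2 * t);
const lerp = (a: number, b: number, t: number) => a + (b - a) * t;

// 2D value noise: interpolate hashed values at the four surrounding lattice points.
function valueNoise2(x: number, y: number): number {
  const xi = Math.floor(x), yi = Math.floor(y);
  const xf = x - xi, yf = y - yi;
  const top = lerp(hash2(xi, yi), hash2(xi + 1, yi), fade(xf));
  const bottom = lerp(hash2(xi, yi + 1), hash2(xi + 1, yi + 1), fade(xf));
  return lerp(top, bottom, fade(yf));
}
```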
It's super awesome of you to take the time to provide detailed feedback for a random stranger's project - good on ya!
A single suggestion:
Try using "the critique sandwich" when commenting on people's hard work - it can help them appreciate the effort you've put in rather than feeling too defensive.
I don’t really see how Lerc’s comment would benefit from the critique sandwich (more commonly called the “shit sandwich”, in my experience).
Lerc was providing thoughtful suggestions for features to add. That’s not a criticism of what’s already in the project.
Based on my comment's negative score it would seem you're not alone.
I agree the feedback was thoughtful (as acknowledged in my reply); I read no ill will in it and meant none myself.
At risk of flogging a dead horse, it can be nerve-wracking to share things (or raise your first real PR, whatever), so I try to respond with that in mind - especially since text-only comms can come off as harsher than intended.
> Based on my comment's negative score it would seem you're not alone.
It's because you didn't finish it with a positive.
That's a fair point. I think I do that more often than not; I was a bit rushed while posting that.
I certainly didn't want to send the message 'Your thing sucks', I appreciate all these weird projects that people do.
In fairness, the only reason it stuck out for me is I do the exact same thing
And I meant what I said; epically thoughtful and cool feedback that I hope the creator sees
I thought about it and added something positive to my comment which was more of a bug fix than a feature request. Maybe it was an open face sandwich.
> open face sandwich
This is gold :chefkiss:
This reminds me of node-based compositing, which is mostly standard in the film industry but, to the best of my knowledge, never made it to still-image editing applications. After doing things the nodal way, it's hard for me to use Photoshop nowadays and have to bake in certain changes.
There are actually a number of more-or-less node/graph-based image editors:
# chaiNNer
Fully node-based image processing.
https://github.com/chaiNNer-org/chaiNNer
# vkdt
Node-based raw and video editor. Sort of the evolution of darktable, by the original developer.
https://jo.dreggn.org/vkdt/readme.html
# darktable
The graph is strictly linear (single input, single output), but you can change the order of the processing and insert new nodes as you want.
Pretty sure there are some others, but those are the three I can remember right now.
Oldie but goodie is Nodebox[0] and for data-driven/dynamic compositing see Cables.gl[1]. Kirell Benzi's work uses the latter and is nothing short of breathtaking[2].
[1]: https://cables.gl/
Natron is also pretty well-known, I think?
Are you asking if it's well known? I don't think it is actually used in any context; it is so unstable that if you start it and do nothing, it will crash in a few seconds.
https://graphite.rs/ (still early in development) offers node-based editing, you may be interested in trying it.
Indeed, it's weird that nobody has brought node-based editing to regular image manipulation before our project, but that's our goal with Graphite. An equally important goal is making the node aspect optional for users by building tooling capable enough to abstract away the node graph for all conventional image editing use cases, letting users work purely with the WYSIWYG tools that secretly manage the node graph behind the scenes, until they're proficient enough with the traditional tools to start learning the power of node-based editing.
That said, we've been building our engine so far with a focus mostly on vector editing. There are some raster nodes but they're pretty limited right now (no GPU support yet, more engineering to do until that's production-ready). The raster compositing and image manipulation nodes + tools + infrastructure will be a big part of the focus for next year on the roadmap (https://graphite.rs/features/#roadmap).
I wonder, where do SVG filters fall on the vector/raster spectrum? I really like that I can tune them hands-on in Inkscape (e.g. fractal noise + displacement map), and then use it anywhere that supports SVG. A little interactive demo from a while back:
[0]: https://observablehq.com/@dleeftink/svg-workbench#options
We will have a large variety of filters, and a subset of them will be implementations of all the SVG filters. Separate from our regular raster render process that's used for displaying content to the screen, we'll have an SVG render process used for exporting the graph data to an SVG file. That process will be specifically designed to preserve the purity of the graph data representation such that it encodes all possible operations in the SVG format directly, and resorts to progressive degradation as required to approximate things SVG can't natively represent. That might mean rasterizing some unsupported raster filters, but the ones SVG supports would be used natively.
I don't think Graphite is the first. Gimel and GIE already exist, and I think there are other more obscure ones.
First in the sense that there's nothing in the industry that's a real product. Certainly there are various experimental concepts people have made as a hobby in a limited capacity, but they don't really count as generally useful tool suites. There's nothing even from the commercial software side of the industry either, which I find surprising. But it gives our project an advantage.
Wow, looking at the demos on the website, I am insanely impressed at just how fast the editor loads into them, and just how snappy the procedural editing is, on my mid-range smartphone no less. That's genuinely inspiring! As someone who this week has had the itch to A) learn rust, B) use webassembly for something, and C) pick up Svelte for something, this was a really cool thing to see this morning :)
Thanks, we'd love to assist you in getting involved with contributing to our project! It's something we take pride in, making it more accessible than most other open source projects to begin contributing to. Come join our Discord (link is on the website home page) and introduce yourself like in your comment here. Cheers!
Could be a Photoshop killer.
I believe that's plausibly within the range of possible outcomes, which isn't true for other projects like Gimp, which has had its window of opportunity rise and then set forever.
I’ve been interested in those "boxes and lines" frameworks for a long time. For instance, numerous data transformation tools like Alteryx and KNIME, and also LabVIEW.
Node-based systems have been used extensively in music for synthesis and effects since it was feasible to process digital audio in real time. In the 60s, electronic music pioneers put together analog oscillators on a patchboard. Today musicians do the same on a screen, with digital operators that are accurate and stable enough to build systems (like the Yamaha DX7) that couldn’t really be built from analog parts.
It is clear how to write a compiler for that kind of graph, and it's probably less of a struggle than manually writing a “shader function”
int16 amplitude(int32 time)
that composes a number of large-scale functions (resample, decimate, reverb, add, …) that are implemented using various strategies. Operator graphs can be compiled to target CPU, SIMD, DSP, MPI, GPU, Spark, etc. The dominant paradigm in graphics is still shader programs, however.
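To illustrate the point, here is a toy sketch in TypeScript (operator names are hypothetical, not any particular product's API): "compiling" such a graph can be as simple as composing the operators into a single sample function, which is essentially what the amplitude(time) signature above amounts to.

```typescript
// Each operator is a pure function of time; the graph is their composition.
type SampleFn = (time: number) => number; // time in seconds -> amplitude in [-1, 1]

// Primitive operators (hypothetical names, for illustration only).
const sine = (freq: number): SampleFn => (t) => Math.sin(2 * Math.PI * freq * t);
const gain = (amount: number, input: SampleFn): SampleFn => (t) => amount * input(t);
const add = (...inputs: SampleFn[]): SampleFn => (t) =>
  inputs.reduce((sum, f) => sum + f(t), 0);

// "Compiling" the graph = composing the operators once, up front:
// a 440 Hz carrier with a quieter 220 Hz sub-oscillator mixed in.
const amplitude: SampleFn = add(gain(0.7, sine(440)), gain(0.2, sine(220)));

// Render one second of audio by sampling the composed function.
const sampleRate = 44100;
const buffer = new Float32Array(sampleRate);
for (let i = 0; i < buffer.length; i++) {
  buffer[i] = amplitude(i / sampleRate);
}
```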
Quite a different problem space, but there is:
https://github.com/derkork/openscad-graph-editor
which allows programmatic 3D modeling using nodes/wires. It exposes _all_ of OpenSCAD (last I checked) and is quite extensible (I use it to control a Python-enabled version of OpenSCAD https://pythonscad.org/ in an effort to make DXFs and G-code: https://github.com/WillAdams/gcodepreview )
Most of what people do in Photoshop is easier in a node-based system anyway, since you can make a non-destructive graph of operations. The biggest downside is that the best program is Nuke, which is expensive, but anyone can use Fusion for free or pay a few hundred dollars for a lower tier of Houdini and use those image manipulation tools.
Houdini also allows VEX shaders built out of nodes, which is basically a more polished version of this interface, where you can manipulate pixels more directly and make your own nodes.
Video consists of many frames and you have to apply the same but slightly different transformations to each frame. Building a pipeline (via nodes or not) to describe these repeated changes is worth the extra effort.
Outside of batch jobs, image editing tasks are generally one-offs with image-specific actions, and building a change pipeline is unnecessary work.
At the end of the day, both workflows are different tools, like hammer vs. mallet.
Afaik there is a small photo/video editor called CameraBag which displays adjustments and filters as little boxes laid out in a row and you can enable or disable them.
Forgive my ignorance, but does what you’re talking about with “node-based compositing” basically boil down to how Blender does its editing, in a way?
Nuke and DaVinci Resolve are the industry standards for compositing. Node-based editing graphs are often called “non-destructive” or “procedural”. It’s basically pure functions in a programming sense.
Blender geometry nodes take this approach for modeling. The rest of Blender is destructive, in that any operation permanently changes the state.
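As a toy sketch of what "pure functions" means here (TypeScript, hypothetical names, not any real compositor's API): each node maps an image to a new image without mutating its input, so the graph can be re-evaluated after any parameter change and the original is never lost.

```typescript
// Toy model of a non-destructive node graph: every node is a pure function
// from image to image, so re-running the graph never damages the source.
type Image = { width: number; height: number; pixels: Float32Array };
type GraphNode = (input: Image) => Image;

// Example nodes (hypothetical): each returns a new Image, never mutates its input.
const brightness = (amount: number): GraphNode => (img) => ({
  ...img,
  pixels: img.pixels.map((p) => p + amount),
});

const invert: GraphNode = (img) => ({
  ...img,
  pixels: img.pixels.map((p) => 1 - p),
});

// The "graph" is just composition; tweaking a parameter means rebuilding and
// re-evaluating, with the source image left untouched.
const compose = (...nodes: GraphNode[]): GraphNode => (img) =>
  nodes.reduce((acc, n) => n(acc), img);

const graph = compose(brightness(0.1), invert);
// const output = graph(sourceImage); // sourceImage itself is unchanged
```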
I wish the GIMP offered something like this.
You can do all that with ComfyUI now.
Layer effects, though?
I'm still butthurt about when Blender introduced a node editor and confused me; I lost all my Blender expertise at that point. (The persistence of a vestigial old way of doing things only makes it worse, because of course I want to try to do everything without nodes, and then I don't have any guidance because all up-to-date docs and tutorials talk about nodes all the time. Nodes! Ruining everything!)
A list of node based image editors https://gist.github.com/SMUsamaShah/c923a0af4543ee2746979328...
But none of these is doing it all using shaders.
Great! Would love it if I could drag any number up/down to change it and see the result in real time. Same thing for entering a new value: see the result as I type. Right now editing the value removes that block's effect, which makes it very hard to tweak and play.
Maybe check out vvvv[1] and the VL.Fuse[2/3] library for realtime node based shader programming.
I am unable to figure out how to use this.
I move blocks around. Some blocks I can attach to the bottom of "run this program" block and they clearly run. But I was unable to add any more blocks that did anything.
The screen is two dimensional so I was expecting to be able to put processing blocks anywhere. Sure I can but they do nothing.
What's missing (for me at least) is an explanation of the user interface.
The interface needs some work if you're not familiar with block based programming environments. Dropping things in place is a bit difficult compared to Scratch or Snap.
> Some blocks I can attach to the bottom of "run this program" block and they clearly run.
The blocks with a square side are commands you can attach to each other to make a program. The round blocks go in the round slots on other blocks.
> The screen is two dimensional so I was expecting to be able to put processing blocks anywhere. Sure I can but they do nothing.
You can click on them to run them without them being connected to a "run this" block. Whatever has a yellow outline is running.
Thank you. I got a little further this time. The UI needs a LOT of work to make it friendlier. It's difficult to enter numerical values, and often the values just go blank.
I suppose there’s a reason node editors mainly use a block-and-line interface.
I'd actually prefer a tree-structured approach, where you could turn a subgraph into a single block, allowing you to structure appropriately.
Uh if it wasn't clear, I was asking HN for a bit of help. The editor does look very cool.
Note it almost works on an iPad. It seems that it should be possible to scroll the palette of blocks, but I can’t. I had trouble with it loading the one that makes a circle; it displayed only a tiny piece for a few seconds before finally showing the whole thing.
Otherwise though it is most of the way there to being a really fun tablet app!
Very nice ;)