"Some might say "just get a better computer". This is why getting a better computer is bad:
1. Affordability: A lot of people, especially in third-world countries, are very poor and can't afford to buy hardware to run Turbobloat.
2. e-Waste: Producing computer chips is very bad for the environment. If modern software wasn't Turbobloated you would buy new hardware only when the previous hardware broke and wasn't repairable.
3. Not putting up with Turbobloat: Why spend money on another computer if you already have one that works perfectly fine? Just because of someone else's turbobloat? You could buy 1000 cans of Dr. Pepper instead."
Took the words right out of my mouth. What a great project. Please keep posting your progress.
"Screen resolutions from 320x200 to 800x600."
Still, higher resolutions were not just invented because of Turbobloat.
Important:
This was just a joke from the site, which I actually took seriously!
There is no 800x600 limit.
But also a convenient excuse to sell more RAM and disk space 'for the textures'.
Hard to know how to respond to that. This could be applied to virtually all technology changes that benefit users but also make money for someone else.
I assume you use a refrigerator and not a hole in the ground with ice. Have you been manipulated into giving money to Big Appliance?
Somebody in rural Africa once told me, "one advantage you have living in a colder area is that you don't have to run your fridge for half the year!" I honestly didn't have any good answer for him as to why I do anyway.
Off topic but I always wanted a fridge that uses cold outside air to cool in the winter.
That sounds either really difficult to make and maintain or an absolutely fridge-industry-destroying innovation. Given weather and stuff, I fear the first. Sick idea tho. I know nothing of fridge engineering besides the basics, so I could be way off.
To an absolute hardliner for appropriate technology, probably -- but simplicity isn't necessarily all-or-nothing, and (IMO) helping people pull off cool things with simpler tools isn't so bad.
Sure, but we're not talking about how to irrigate a field here, we're talking about being limited to 800x600 resolution when playing a game.
Some people were teenagers when that was the best you could get, so I'm guessing they see it as a "good old days" baseline that they can be principled about while indulging their nostalgia.
I can see that, but I think calling it just nostalgia-driven is judging a book by its cover.
First off, I want to say you can totally have a design ethos that covers game engines as much as irrigation systems -- Lee Felsenstein explicitly cited Ivan Illich's notion of 'convivial technology' as an influence on his modems. And Illich mostly talked about bicycles.
What I see in this project is a specific kind of appropriate technology -- 'toaster compatibility' -- mixed with conscious adoption of old methods and aesthetics to serve and signal that end. Which is cool, IMO.
HTMX uses similar techniques in trying to 'bring back' hypermedia and reduce dependencies, although I think they're after a different kind of simplicity. And of course, their Hypermedia Systems book makes similar nods to 90s-software aesthetics: https://hypermedia.systems/
I remember when that was the best I could get, and I was thrilled with it at the time. But then I was even more thrilled when Far Cry came out. Then Crysis ... why would I go back? Now you can surely argue that nowadays creativity has declined in favour of just more textures, but I like to have both.
Still, for a simple game, limiting it to 800x600 for performance and dev reasons - why not? For me, though, it means I see no use case for myself.
You probably missed it in another subthread, but that limit was a joke on their website, not an actual limit.
There is no such resolution limit. That was a joke.
Is it enough to make gameplay the main challenge?
I would argue refrigerators provide a lot more utility for most people than high poly counts.
I think I’ve gained more utility from being able to look at 3 spreadsheets at once than I’ve gained from my refrigerator (though if we’re counting the refrigeration of the supply chain for food and medicine, then that wins out by a landslide).
Most people don't need 3 monitors. Pretty much everyone needs or has a fridge, except for the least fortunate in society. He said most people, so you just fall into the much, much smaller minority, with a bit of a questionable claim. Like, if you had to give up one, it'd be your fridge over the monitors? The utility of the monitors runs out when you have to spend time getting fresh ingredients every other day.
Fake Optimization in Modern Graphics (And How We Hope To Save It):
Dude is pitching and wanting funding for THEIR solution from the vids I saw, not a general industry change or free fix.
Also their AI upscaling makes it look like the guy is wearing foundation and makes it hard to take seriously lol.
> But also a convenient excuse to sell more RAM and disk space 'for the textures'.
Except different companies sell different things. This is like the conspiracy that women's pants don't have pockets to sell more purses.
"This is like the conspiracy that women's pants don't have pockets to sell more purses."
Oh my god, this explains everything!
(btw, I recently learned that the 9/11 inside-job conspiracy theory has evolved. Nowadays the standard theory is that there weren't even planes in the first place, just bombs and smoke)
I know one thing for sure. No airplane crashed into building 7. I also know that terrorist passports are made from the strongest material known to man.
I can't tell if you're on the side of the conspiracy or not, but you are correct that no plane crashed into building 7. Debris fell from 1 and 2 and set the building on fire, and since there was no fire suppression, it all went up pretty badly.
Textures are bad, but screen resolution is good.
Ya gonna just leave empty pixels on display?
Shaded, of course
Is that a hard wired limit? I know nothing about game engines, so I'm a bit in the dark why it would only support up to that resolution. Is this about optimized code in terms of cpu cache aligned instruction pipelines etc?
"Is this about optimized code in terms of cpu cache aligned instruction pipelines etc?"
That is what I would assume, but so far I have not found a reason explaining the limit. It might also just be that way because the author likes it like that.
Author stated in the thread that the limit doesn't exist. It's just a joke
They say that but the engine seems to require an OpenGL 4 GPU while the graphics look like something that could be done on a Voodoo card.
Requires a 15 year old card (so, 2010.) Six years after Half Life 2 but looks like Half Life 1, which shipped with a software renderer (no GPU needed at all!)
I fear the turbobloat is still with us.
Ok, so on the one hand we have one of the most universally acclaimed PC games in history, with a team of amazing programmers and artists behind it and a 40 million dollar development budget, and which represented the cutting edge of what was possible at the time in terms of squeezing every bit of performance out of a machine. On the other we have a one-person hobbyist project that is trying to make a statement about consumerist expectations for more, more, more.
If you're sincere about that comparison then I think you're missing the point.
Being able to run something on fifteen year old machines is still plenty anti-turbobloat. And I suspect the 2010 requirement has more to do with the fact that it's pretty difficult to debug software for 1990s hardware that you don't have (or lack proper emulation for).
And if you go back far enough, you reach a tipping point where supporting old hardware can get in the way of something running on new hardware, especially if we're talking about games, unless we're really careful about what we're doing and test on real hardware all the time. Not very realistic for a one-person side project.
That six year gap between HL2 and 2010 is considerable, so I don't think I'm being terribly unfair. Also, the article invited the Half Life comparison.
What is ‘turbobloat’?
From context, I interpret it to be ‘graphics tech I don’t like’, but I’m not sure what counts as turbobloat.
The whole post is tongue in cheek; it just means "features the game you're making doesn't need (like modern graphics with advanced shaders and super high resolution requiring the latest graphics cards)".
If you're making a game that needs those features, obviously you'll need to bloat up. If you're not, maybe this SDK will be enough and be fast and small as well.
Manufacturing and shipping a new computer can be worth it long term. Improvements in performance and energy consumption can offset the environmental impact after some time.
Of course for entertainment it’s difficult to judge, especially when you may have more fun on an old gameboy than a brand new 1000W gaming PC.
> after some time.
This is doing a lot of heavy lifting in this sentence.
What you're talking about is called the embodied energy of a product[0]. In the case of electronic hardware it is pretty staggeringly high if I'm not mistaken.
Yes, it can be. Last time I did the maths for one of my use cases, it was a matter of a few years when replacing a few old amd64 boxes with a single Mac mini.
I'm starting to believe there is an external force that drives down the quality of game engines over time. In most tech, the things that catch on are the things that are the easiest to develop curriculum for. The shape of a node-based editor like Unity is uniquely suited to explaining over a number of classes. (Source: I had to learn Unity at my University) On the other hand, an engine like raylib can be grokked in an afternoon, so a university-level raylib class wouldn't work. So you have all these amateur game developers and programmers coming out of diploma mills, and all they know is Unity/Unreal, so companies hire for Unity/Unreal, so universities teach it, etc. See also: Java being popular. Then of course, all these companies have wildly different needs for their Unity projects, so Unity, being a for-profit company that serves its customers and not a single disgruntled programmer, has to make their engine conform. So you end up with 'turbobloat.' (amazing term, btw)
The Half-Life and Morrowind engines are in a unique situation where they're put together by enthusiastic programmers who are paid to develop stuff they think is cool. You end up with minimal engines and great tech, suited to the needs of professional game developers.
This seems like something that sits in between a raylib and a Unity. I haven't used it, but I worry that it doesn't do enough to appeal to amateur programmers, while doing too much to appeal to the kind of programmer who wants a smaller engine. I could be very wrong though; I hope to be very wrong. Seems like the performance here is very nice and it's very well put together. There's definitely a wave of developers coming out frustrated from Unity right now. As the nostalgia cycle moves to the 2000s, there's a very real demand to play and create games that are no more graphically complex than Half-Life 2.
Anyway, great project. Great web design. Documentation is written in a nice voice.
The other thing to remember is that games and engines built together shape each other - Doom couldn't have a floor above another floor (an engine limitation because of CPU limitations), so the level designers created tricks to make it feel like it did.
When you're designing both you can take advantage of features you add but also avoid the ones you can't do well - or even change the art style to "fit" the engine - pixelated angular mobs fit Minecraft quite well, but once they start getting more and more detailed you're in an "uncanny valley" where they look worse and more dated than Minecraft - until you finally have enough polygons to render something decent.
Oh, absolutely. I maintain the engine for my video game and it's ultra-minimal tailored to my needs. That leads to better performance, and a much slimmer build size. (currently sitting at ~900KB for the optimized build of a nontrivial game, assets bundled separately). It's also a better development experience, imo.
My argument was mainly about these more generalized engines, like raylib, 'Tramway', or Source.
A guy who worked on Bioshock (lead design?) said in an interview:
"At work if we want to experiment with a new idea I have to assembly a team, and spend at least a month before we have something we can work with. Meanwhile, at home, I can make a whole Doom campaign in one evening."
(quoting from memory, sorry)
There are new games that still use (modern forks of) the Doom engine!
https://store.steampowered.com/curator/42392172-GZDoom-Games...
It's like with Cyberpunk: if they hadn't used RED Engine, which is horrible bloatware, they could've finished the game in half the time with half the people, and it would run on a 10 year old laptop at 60 fps. /s
I think it's more acknowledging the gain in development efficiency you can get by working on something with lower fidelity.
I love library-based game dev, like raylib or libGDX, but there is a reason that games like Slay the Spire moved to Unity, and then Godot for their sequel.
That is to say, I don't think people are using Unity because they were mistaught by complexity loving professors.
Another thing related to this that I found kind of interesting, is this post [0] (unfortunately on twitter) of the developer of Caves of Qud, where they fully ported their game from Unity to Godot as an experiment, showing that they seem to have built the game around a single node, essentially just using Unity (and then Godot) as the presentational layer, similar to a simple graphics library type thing, basically ignoring the whole node system of either engine.
I wonder if this kind of architecture might also be a pretty good approach. The fact that they were able to port the game to another engine within a day is pretty impressive.
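For anyone wondering what that decoupling looks like in practice, here's a minimal C++ sketch of the idea (all names are hypothetical, not from the actual Caves of Qud codebase): the game core owns all state and logic, and the engine only ever sees a tiny presentation interface.

    #include <iostream>
    #include <vector>

    struct Glyph { int x, y; char symbol; };

    // The only surface the engine ever sees.
    class Presenter {
    public:
        virtual ~Presenter() = default;
        virtual void DrawGlyph(const Glyph& g) = 0;
        virtual void Present() = 0;
    };

    // All gameplay state and logic lives here, engine-free.
    class GameCore {
    public:
        GameCore() : glyphs_{{0, 0, '@'}} {}
        void Render(Presenter& out) const {
            for (const Glyph& g : glyphs_) out.DrawGlyph(g);
            out.Present();
        }
    private:
        std::vector<Glyph> glyphs_;
    };

    // Porting to a new engine means rewriting only this adapter
    // (UnityPresenter, GodotPresenter, ...).
    class ConsolePresenter : public Presenter {
    public:
        void DrawGlyph(const Glyph& g) override {
            std::cout << g.symbol << " at (" << g.x << "," << g.y << ")\n";
        }
        void Present() override { std::cout << "-- frame --\n"; }
    };

    int main() {
        GameCore game;
        ConsolePresenter out;
        game.Render(out);
    }

If the boundary stays that narrow, a one-day port stops sounding miraculous: you only rewrite the adapter, never the game.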
I don't think that the developers of Slay The Spire were taught by complexity loving professors, no. But education does more than influence the people at universities. Education informs norms, traditions, and styles that permeate through industries. An example from outside tech: the music notation app Finale gained a stranglehold on the education market, and now it's one of the standards for notation, despite being the worst option (source: have you tried Finale?).
I've never played the game, but my understanding is that Slay the Spire largely impresses on a design and artistic front, not a technical one. Its engine requirements were not based on feature set or code quality, but on what developers knew. So they probably picked Unity because it was ubiquitous. Education starts the problem, and then devs who need something common they can hire for continue the problem. I don't blame devs for this, it's the right choice to make and obviously Slay the Spire is great, but I am saying that this is a force that drives down the quality of game engines.
No, they started with a code only framework (my beloved LibGDX) and then moved to Unity/Godot for the sequel for pragmatic reasons. See my other comment.
Being ubiquitous was part of the decision, yes, because it means there are many high quality plugins that can be integrated instantly, which is a huge time-saver.
Fair, though when I say "quality decline" I'm mostly talking about extraneous useless features and overly complicated node-based architectures that require GUIs. There are simpler ways to do all of this stuff. This engine, Tramway, is proof of that. Godot sits somewhere in the middle, I've used it a little but I don't know enough about it to say whether it's overly complex or not.
You are correct: I definitely agree that not all gamedevs should be making stuff from scratch, but I also think that Unity is a little too much. There's a good middle ground somewhere slightly above raylib.
My argument is that the promotion of engines that live near this middle ground is blocked by education: people who want to be able to sell long courses to the people who look up "how to make a video game."
I’m curious, what was/is the reason? I would like to learn more about the tradeoffs people are experiencing.
This is mostly on switching from Unity to Godot: https://caseyyano.com/on-evaluating-godot-b35ea86e8cf4
But also features this brief comment on game libraries:
"More than a Game Library: Having worked in SDL and LWJGL I’d like a bit more handholding. A few in-house APIs for loading/unloading resources, font stuff, and display handling please. I don’t want to write those; I want to make games!"
and some words on LibGDX in specific
"The reason we chose LibGDX for Slay the Spire was because it could do PC, Mac, and Linux. Yes, it runs in a JavaVM and it has all sorts of problems but it’s write once, run anywhere amirite? No. It don’t run on consoles and Mac and Windows updates constantly break it. "
All that said, I still love LibGDX, raylib, löve, and if I was going to make a game I'd use one of those because I think they're more fun. But I'm also not doing this professionally, on a deadline, and with a requirement to work on consoles.
Games used to be crisp as hell, and now they run like shit, crash, take 150GB to download, and 150 years to launch. If we played games for graphics, one of the most popular MMOs wouldn't be based on a browser game from 2002; in fact, we wouldn't be playing games at all, we would be playing real life.
Look at what Epic Games did with Fortnite. They killed a smooth-running game with a competitive scene, for turbobloat graphics and skins.
> and programmers coming out of diploma mills, and all they know is Unity/Unreal, so companies hire for Unity/Unreal, so universities teach it, etc.
There is a similar phenomenon with ArcGIS.
Surely there's something good about Unity and its nodes if games like Kerbal Space Program can be made with it.
"A thing should be a thing. It should not be a bunch of things pretending to be a single thing. With nodes you have to pretend that a collection of things is a single thing."
Just want to say this line was great, very Terry Pratchett. Feels like something Sam Vimes would think during a particularly complex investigation. I love it and hope you keep it moving forward.
Haven't gotten a chance to mess around with it, but I have some ideas for my AI projects that might be able to really utilize it.
In isolation, isn't the quote prima facie so bad and so wrong though? We think of collections of things as single things constantly. A human is a collection of body parts, body parts are collections of chemicals, chemicals are collections of molecules, molecules are collections of atoms... and yet at each level we think of those collections as being single things. Not being able to do that is just... absurd.
The project looks awesome though.
Agreed. Type systems are nearly always "temporal" yet are too simply designed to address that.
"Temporal" to mean that at any given slice of time during a running application all objects have a signature that matches a type.
Yet most programming languages only allow compile-time analysis, and "runtime" is treated as a monolithic "we can't know anything about types at this point".
I think maybe it is intended as a critique of systems where the individual parts don't compose or scale particularly well, where it feels sort of hollow to call it a "system" because of how uncoordinated and inefficient it is at the "single things" layer.
I think the point is that a body is obviously and intuitively a thing, and doesn't need any pretending. Whereas take something like a marketing brand that has been spread too thin over a bunch of disparate products, everyone has to pretend really hard that it is one thing.
Yes! In programming speak, you're talking about levels of abstraction.
The thing about nodes is a joke like around 80% of the text.
It sounds like the sort of thing Sam Vimes would say before being begrudgingly forced to admit, after being forced by Sybil to undergo some painful personal growth, that maybe, sometimes, a thing might need to be more than just a thing.
And that Vetinari’s entity component system might seem complicated, but it works, damnit, and it makes the city function.
And once Nobby says he likes Tramway, everyone realizes Vetinari was right all along XD
(I'm just glad someone got the reference)
> And once Nobby says he likes Tramway, everyone realizes Vetinari was right all along XD
Well, except for Detritus
Well OBVIOUSLY except for Detritus!
Things is too many things to count.
Unless we are talking about liquid-cooled Detritus.
This quote is likely intended for people who've tried other solutions and disliked them, but as someone who's never used a game engine of any kind, I'd appreciate someone giving me an ELI5 of how "nodes" relate to "pretending that collections of things are things."
Is the problem here that using a nodal editor encourages/incentivizes you through its UX, to assign properties and relationships to e.g. a `Vector` of `Finger`s — but then you can't actually write code that makes the `Vector<Finger>` do anything, because it is just a "collection of things" in the end, not its own "type of thing" that can have its own behavior?
And does "everything is an Entity, just write code" mean that there's no UX layer that encourages `Vector<Finger>` over just creating a Hand class that can hold your Fingers and give the hand itself its own state/behavior?
Or, alternately, does that mean that rather than instantiating "nodes" that represent "instances of a thing that are themselves still types to be further instantiated, but that are pre-wired to have specific values for static members, and specific types or objects [implicitly actually factories] for relationship members" (which is... type currying, kind of?), you instead are expected to just subclass your Entity subclass to further refine it?
In a node-based engine, everything is just a graph of mostly ready-to-use nodes; all you do is create nodes, parent them, delete them; behavior can be attached to specific nodes. There may be no clear boundary where an entity "begins" and where it "ends", because everything is just a bunch of nodes. I'm not sure why the author is against it; in a proper engine you can quickly prototype mechanics/behaviors by just reusing existing nodes/components, and it's very flexible (say, I want to apply some logic to all child nodes recursively -- no problem; or I want to dynamically remove a certain part of a character -- I just unparent the node), and often such engines allow you to test things without stopping/recompiling the project. On the other hand, OP's engine apparently requires you to do everything by hand (subclass Entity in code) and recompile everything each time. Basically, a node-based engine is about composition, and OP apparently prefers inheritance.
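A rough C++ sketch of that contrast, if it helps (hypothetical names, neither Godot's nor Tramway's actual API): composition builds a thing out of detachable parts at runtime, while the subclass approach fixes the parts into a type at compile time.

    #include <memory>
    #include <vector>

    // Composition: a "thing" is whatever nodes you hang off a parent.
    struct Node {
        std::vector<std::unique_ptr<Node>> children;
        virtual void Update() {
            for (auto& child : children) child->Update();
        }
        virtual ~Node() = default;
    };
    struct SpriteNode : Node { /* drawing */ };
    struct CollisionNode : Node { /* physics shape */ };

    // Inheritance: a "thing" is a concrete type defined in code.
    struct Entity {
        virtual void Update() = 0;
        virtual ~Entity() = default;
    };
    struct Platform : Entity {
        // The sprite and collider are fixed members, not detachable children.
        void Update() override { /* move, collide, draw */ }
    };

    int main() {
        // Composition: assemble a platform at runtime out of parts.
        Node platform_from_nodes;
        platform_from_nodes.children.push_back(std::make_unique<SpriteNode>());
        platform_from_nodes.children.push_back(std::make_unique<CollisionNode>());
        platform_from_nodes.Update();

        // Inheritance: the platform's parts were decided at compile time.
        Platform platform_from_code;
        platform_from_code.Update();
    }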
> There may be no clear boundary where an entity "begins" and where it "ends"
Is it useful to not know where the boundaries are? Sounds like it could become a nightmare.
I think the paragraph after is really interesting:
“Also when creating things with nodes, you have to go back and forth between node GUI and code.”
You can see Godot’s Node/GDScript setup as a bit of a response to this argument. Or rather, they try to make the “going back and forth” as seamless and integrated as possible with things like the $ operator and autocomplete.
That said, I do think at the end of day, the “thing is a thing” mindset ultimately prevails, as you have to ship a game.
I’ve been trying to learn godot for years and I’m not doing so hot. This chatter feels very relevant to my struggles but I’m not the best with software design, so what do I know? I was in a tizzy the other day and spammed my thoughts out about it, I hope it’s relevant here.
trying to wrap my head around using scenes vs. nodes in something simple like a 2d platformer.
Platforms:
My thinking: I'm gonna be using a ton of platforms, so it'd make sense to abstract the nodes that make up a platform to a scene, so I can easily instance in a bunch.
Maybe I'm already jumping the gun here? Maybe having a ton of an object (set of nodes) doesn't instantly mean it'd be better off as a scene?
Still, scenes seem instinctually like a good idea because it lets me easily instance in copies, but it becomes obvious fast that you lose flexibility.
So I make a scene, add a staticbody, sprite, and collision shape. I adjust the collision shape to match the image. Ideally at this point, I could just easily resize the parent static body object to make the platform whatever size I want. This would in theory properly resize the sprite and collision shape.
But I am aware it's not a good/supported idea to scale a collision shape indirectly; instead you should directly change its extents or size. So you have to do stuff based on the fact that this thing is not actually just a thing, but several things.
This seems like a bad idea, but maybe one way I could use scenes for platforms is to add them to my level scene and make each one have editable children. Problem with this is I'd need to make every shape resource unique, and I have to do it every time I add a platform. This same problem will occur if I try duplicating sets of nodes (not scenes) that represent platforms, too. Need to make each shape unique. That said, this is easier than using scenes + editable children.
Ultimately the ‘right’ way forward seems to be tilemaps, but I wanted to understand this from a principles perspective. The simple, intuitive thing (to me) does not seem possible.
When I ask questions about this kind of stuff, 9/10 times the suggestion is to do it in a paradigmatic way that one might only learn after spending a lot of time with an engine or asking the specific question, rather than what I would think is a way that makes dumb sense.
I'm used to the old old school way of doing things, meaning you write shit in a text editor and then run it. There's no parenting system, no hierarchies, no inheritances, no massive trees of various classes/subclasses that you have to manage. Godot goes beyond friction and actively pisses me off, because what should just be ten seconds of writing in a new text document turns into sometimes two or three minutes of interacting with the GUI: you have to create a new project, save it, make a scene node, create an object node, create a sub-object node, create an action node, create a container node, and then you can start editing code, but only once you link whatever it is to that specific object instead of it being a general script you can re-use, because re-using it means making copies of the parent object and-- It's too complicated.
A lot of 2D game engines are near frictionless because they're just "write and save" simple, and Blender Game Engine was actually great about translating this to a UI, and more importantly a UI dealing with 3D, since every object in the viewport could just have its own little code block attached to it just by clicking it. It was no different in function than saving the .py file in a new folder, really. This method Unity "pioneered" of everything having to be part of a giant tree in the asset manager is such a slog and makes keeping track of anything during iteration a nightmare. I still prototype in BGE sometimes because every other 3D engine sprawls too quickly and has so many unnecessary steps.
If somebody could just write a text-only "write and save" style editor like LOVE2D but for 3D (and support it for longer than two months) that would be amazing.
Honestly this is the kinda stuff that put me off Godot. A million ways to do things and they all seem bad or feel like they’re going to shoot me in the foot later. Somehow I never had these issues in Unity
I think every developer finds footgun issues with various tools for their aims. Or, I know developers that express similar sentiments towards Unity (or Unreal, proprietary engines, etc.).
When a game team is successful, it can often stem from having picked tooling and workflows that enabled them to be productive enough and avoid enough pitfalls. That’s going to change from project to project and team to team.
Man maybe I should have started with unity but I’m such a fucking hipster I just never go with the popular thing. Ultimately happy I don’t have to worry about the licensing BS and I still get to claim hipsterdom, but it’s clashing with my desire to work in ways that make sense to my poo brain.
Are there any Godot FAQs or documentation with "blessed" paths for the various mainstay gamedev needs?!
The problem is "a thing is a thing" only gets you those exact things with those exact thing-behaviors.
Sharing behaviors or making things look or act like a little bit like this other thing becomes an absolute nightmare, if not out right impossible, with "a thing is a thing."
There's a reason graph based systems or ECS is basically the corner stone of every modern engine. Because it works and is necessary.
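For anyone who hasn't run into ECS before, the core trick is that behaviors live in plain data components and "things" are just IDs; systems are loops over whichever entities have the right parts. A stripped-down C++ sketch of the idea (not any particular engine's API):

    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    using EntityId = std::uint32_t;

    struct Position { float x, y; };
    struct Velocity { float dx, dy; };

    // An "entity" is only an ID; its parts live in separate component stores.
    std::unordered_map<EntityId, Position> positions;
    std::unordered_map<EntityId, Velocity> velocities;

    // A system is a loop over every entity that has the right components.
    void MoveSystem(float dt) {
        for (auto& [id, vel] : velocities) {
            auto it = positions.find(id);
            if (it != positions.end()) {
                it->second.x += vel.dx * dt;
                it->second.y += vel.dy * dt;
            }
        }
    }

    int main() {
        positions[1] = {0.0f, 0.0f};
        velocities[1] = {2.0f, 0.0f};  // sharing behavior = attaching a component
        MoveSystem(0.5f);
        std::cout << positions[1].x << "\n";  // prints 1
    }

Making one thing act a little bit like another is then just attaching the other thing's component, no subclass required.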
Unfortunately everything is a collection of things pretending to be a single thing, even single things. The best we can do is pretend, or avoid finding out.
I wholeheartedly agree with the turbobloat problem. Machines are so much more powerful nowadays, but most programs actually feel slower than before.
Very cool project. And the website design is A+
> but most programs actually feel slower than before
I feel like this is only true for people who happened to luck out with slightly overpowered hardware in very specific time periods.
As someone who used pretty average hardware in the windows 98/2000/xp era as a teenager even a low end modern laptop with an ssd running Windows 10/11/KDE/Gnome/Whatever is massively more responsive even running supposedly bloated webapps like vscode or slack.
Well... I recommend you try an old Amiga 1200. You will be surprised at how highly responsive this 20 MHz machine is, and it boots faster than any current machine with Windows 10/11. However, it would not look fancy to our current eyes.
Loading times from floppies weren't even remotely quick.
Launching a game off a floppy was much faster than opening the bloatware that is Steam and then downloading gigabytes of bloat.
> 20 MHz machine
14 MHz!
I don't understand the term "turbobloat", never heard it before (and I've made games), the author doesn't define it and a quick search returns the submission article on Kagi, while nothing relevant at all on Google.
So, what does it mean? Just "very bloated"?
Edit: Reading around on the website and seeing more terms like "Hyperrealistic physics simulation" makes me believe it just means "very bloated".
I took it to mean "increasingly bloated over time relative to hardware, phrased in a funny, irreverent way." It's a vibe thing, not a definition thing.
I don’t think it is a real word. “Turbo” means “very” or more accurately “extremely,” but is typically only used in a positive context, e.g. turbocharged. That makes the turbobloated neologism ironic and funny.
It’s funny that the "turbo-" prefix is simply derived from the word "turbine", as in a turbocharger works by means of a turbine, similarly "turbojet" or "turboprop" or "turbopump", but has turned into an augmentative prefix due to the connotations, and also because of the parallelization of "turbocharger" with the earlier term "supercharger", meaning a charger powered mechanically by the crankshaft.
Ontologically, it implies the existence of Turbobloat 3000.
If bufferbloat is increased latency caused by excessive use of increasingly available RAM, then turbobloat is increased latency caused by excessive use of increasingly available CPU.
Certain vintage hardware had a "turbo" button to unleash the full speed of the newer CPUs. The designers blind to the horrors of induced demand.
Because of that factor, I'm not quite sure what's going on with the article or comments here altogether.
If you gave it to me in a cleanroom and told me I had to share my honest opinion, I'd say it was repeating universally agreeable things, and hitching it to some sort of solo endeavor to wed together a couple old 3D engines, with a lack of technical clarity, or even prose clarity beyond "I will be better than the others."
I assume given the other reactions that I'm missing something, because I don't know 3D engines, and it'd be odd to have universally positive responses just because it repeats old chestnuts.
> Most Unity games look very bad, even with fancy shaders, normal mapping and other techniques.
This seems to be an increasingly common point of view among those of a certain age.
It is definitely the case that the art of a certain sort of texture mapping has been lost. The example I go back to is Ikaruga, where the backgrounds are simply way better than they have any right to be, especially a very simple forest effect early on. Some of the PS2 era train simulators also manage this.
The problem is these all fall apart when you have a strong directional light source like the sun pointed at shiny objects, and the player moves around. If you want to do overcast environments with zero dynamic objects though you totally could bypass a lot of modern hacks.
Yes. And the thing is, some modern games ARE overcast with no dynamic lights, and then go on to use Lumen of all things. This was the case with the Silent Hill remake, and that thing runs very slowly, looks WORSE on PS5 Pro, the grass looks worse than in older games, and so on.
Seriously, the plot of Silent Hill was invented to justify optimization hacks: you have a permanent foggy space called "fog space" to make it easier to manage objects on screen, and the remake instead stupidly wastes a ton of processing trying to make realistic (instead of supernatural-looking) fog.
It's not the 90s anymore. Using basic linear fog with ultra-realistic assets would just look terribly out of place.
The point about Lumen stands though. Baked lighting would have been much better in this case.
It's worse than that: in the Silent Hill remake, everything is being rendered behind the fog too. Yes, you read that right: they render a whole town with complex geometry, only to hide it with fog afterwards so you see none of it.
Most good looking games built with Unity don’t ’look like Unity games’ so people don’t think of them as constituting an example of ‘what Unity games look like’. So the archetype for ‘what a Unity game looks like’ remains at ‘pretty rough’.
The ‘art’ of making stuff look good has not been lost at all. It’s just very unevenly distributed.
When a team has good model makers and good texture artists and good animators and good visual programming, it looks great, whether it’s built in Unreal or Unity or a bespoke engine or whatever.
I don’t think that is what people are getting at, since they uniformly want more texture detail.
There are a lot of technically polished Unity titles that get knocked because they look like very well rendered plasticine, for want of a better description.
For example, there was an argument on here not too long ago where various people pushing the “old graphics were better” (simplification) did not understand or care that the older titles had such limited lighting models.
In the games industry I recall a lot of private argument on the subject of whether the art teams would ever understand physically based models, and this was one of the major motivations for a lot of rigs to photograph things and make materials automatically (in AAA since like 2012). The now widespread adoption of the Disney model, because it is understandable, has contributed to a bizarre uniformity in how things look that I do think some find repulsive.
Edit to add: I am not sure this is a new phenomenon. Go back to the first showing of Wind Waker for possibly the most notorious reaction.
There's an insistence that materials can overcome lacking texturing and normal mapping. It's not true, but it's a result of a lot of marketing fluff from things like Unreal Engine being misunderstood or misrepresented. Did you know that in Super Mario Sunshine, for "sharp" shadows that the GameCube was unable to render, they actually used flattened meshes instead? In Delfino Plaza the shadows under the canopies near the Shine Gate are actually meshes instead of textures. Meanwhile, the tiled plaza that the mesh shadows lie on looks so nice because it's not one giant texture; it's actually several dozen 128x128px textures all properly UV mapped. In a modern game you'd get two brick textures and a noise pattern to blend them, and they'd all be 2048x2048px, with the shadows being raytraced so they have sharper edges.
Ironically, as we've gotten hardware with more VRAM and higher bus speeds, we've decided to go with bigger textures instead of more of them. The same with normal mapping: instead of using normal mapping alongside more subdivided models, we've just decided that normal maps are obsolete and physically modelling all the details is the technologically forward way. Less pointy spheres is one thing, but physically modelling all the cracks and scrapes on the sphere is just stupid and computationally wasteful.
> Ironically, as we've gotten hardware with more VRAM and higher bus speeds, we've decided to go with bigger textures instead of more of them. The same with normal mapping: instead of using normal mapping alongside more subdivided models, we've just decided that normal maps are obsolete and physically modelling all the details is the technologically forward way.
This right here is precisely what I alluded to in another reply as the motivator for generating meshes and PBR materials from controlled photography. Basically you now have enough parameters per texel, which interact in distinctly unintuitive ways, that authoring them is a nightmare, hence people resorting to what you describe.
Easier to market "more resolution" and "more polygons" than masterful use of UV mapping.
You can get something working quite quickly (especially with things like Unity) - but to get them looking amazing takes extra skill and polish.
Even a "2D" game like Factorio has amazing polish difference between original release, 1.0, and today.
(This can very obviously be seen with modded games, because the modded assets often are "usable" but don't look anywhere near as polished as the main game.)
I replayed Half-life 2 recently and was struck, even without high-res texture packs, how amazing the game still looks today.
I think this is because of how extremely cleverly they picked the art style for the game. You have a lot of diffuse surfaces for which prebaking the lighting just works. Overcast skies allow for diffuse ambient lighting rather than very directional lights, which force angle-dependent shading and sharp high contrast shadow outlines. And the overwhelming majority of glossy surfaces are not too shiny which also helps out a lot. All of these are believable choices in this run-down, occupied, extremely dystopian world. And the texturing with its muted color palette pulls it all together.
There's been a rumor going around that developers move away from prebaked lighting primarily because it complicates their workflow.
Prebaked lighting is a rather crude approximation that only looks good in certain scenarios. Correct dynamic indirect lighting provides a much better integration between different scene elements and better spatial cues. Movable and static objects can share the same lighting model and you don't get an immersion breaking situation where e.g. the one door that you can open in a hallway stands out because it has worse lighting. It is an overall win, not just during production.
That rumor didn't exist 20 years ago when Half Life 2 had come out. Pre-baked was the only way to go. Now we have performant ray-tracing.
That's why I think really good art direction beats raw graphical power any day. Source was pretty impressive back in the day, but the bit that's stood the test of time is just how carefully considered the environments and models are. Valve really put their resources into detailing and maximizing the mileage they got out of their technical constraints, and it still looks cohesive and well-designed 20 years later.
Still baffles me how unnerving the Ravenholm level is even today. It's got a creepy, unsettling vibe, 20 years later, entirely due to really decent art direction.
Definitely. A hyper-talented team combining new physics-based gameplay, art style and rendering technology made something just amazing.
Half-life 2 has received multiple updates to shading and level of detail since it was released, so it looks a little better than it did at release. Still, it was already a visually impressive game at release.
I just replayed Half Life 2 less than a week ago! I also caught myself thinking, "the levels may not be as detail filled as modern games, but the artistic direction both in graphics and level design is better than many modern designers with bigger budgets."
Great! I really liked the intro, with the Socialist state-style architecture and processes, and that degrading infrastructure contrasting strongly with the sleek, modern weaponry held by the oppressors. I could've just walked around that world and been pretty happy with the game!
You might enjoy "Black Mesa", HL1 remade with the HL2 engine. Played it during the pandemic. No Regrets.
Black Mesa is how I remember the original game. Worth every second I spent with the game!
Great game - but it definitely doesn't work well on Linux, natively or via Proton. Just in case any Linux gamers were thinking of buying it.
I don't have a Windows machine at all. And until I got myself a Steam Deck I only played Linux-native games. So I definitely finished Black Mesa on my 15 inch Ubuntu Dell with a dGPU.
Did you play the original Half-Life 2 from 2004 or one of the "remasters" (though they weren't called that) that comes every few years that updates the graphics and/or engine slightly?
Fair question - no, I just played whatever's on Steam, on Linux. Maybe the textures are higher quality, but I remember the physics-based gameplay fresh as when I was playing in 2004!
I don't think there's any official way to play the original 2004 version (or even the Source 2006/Episode One version either). The Xbox version is probably closest but they used palettised textures for the Xbox version - something that no PC version of Source ever supported - probably to get it to run okay.
That's such a pity, I always wanted to play HL2 on DirectX 6 mode.
There aren't any official methods, but with a little elbow grease, several ways to run a vanilla boxed copy of Half-Life 2 are outlined in this thread: https://www.vogons.org/viewtopic.php?t=70250
I've also wanted to run HL2 in DirectX 6 as well on period correct GPUs. Specifically a TNT2 Ultra and a Voodoo 5 5500 I have laying around. I just haven't gotten around to it.
Those were some fancy graphics cards when they came out.
Maybe you can? If -dxlevel 60 doesn't work any more, I think there's a file called dxsupport.cfg (or something like that) that adjusts various graphical settings based on the DirectX level detected. I don't really know how it works, but my understanding is that the engine figures out what version of DirectX you have installed and sets the DirectX level based on that, but all it controls is various graphical settings.
Yeah, it was great. They really pulled out all the stops when it came to cinematic quality on that one. They also did a lot of second order things, like marrying the scenes to the plot, that a lot of games don't do well or at all.
As someone currently working with a little team trying to make low-poly games using Godot - this is awesome!
> Also when creating things with nodes, you have to go back and forth between node GUI and code.
> All of the mainstream engines have a monolithic game editor. It doesn't matter how many features you use from it, you still have to wait 10 minutes for all of them to load in.
These notes really resonated; the debug loop even with Godot, using minimal fancy features, felt a lot slower than other contexts I've programmed in. Multiple editors working around a single data file spec is also a cool idea! Having found that a unified IDE makes it easier for different developers to create merge conflicts, I could see how editors with a more specific purpose may also help developers in different roles limit the scope and nature of their changes. Keen to see how the engine progresses!
I am pretty proud of figuring out how to TDD a C# module without booting Unity for a hackathon last month.
Managed to contribute my bit from an underpowered netbook.
I had never written a line of C# before, but I'll be damned if I'm going to concede TDD from the CLI. I knew it could be done, and I made it work. Everybody thought I was crazy, though, and none of the sponsors' DevRel were any help.
And, of course, the biggest point of friction for us, that weekend, was our beefiest machine still had to boot and reboot the damned Unity IDE for a thousand years! Incredible the fetters some folks tolerate.
I'm not very familiar with Unity and its limitations, or the difficulty of this task. What challenges did you encounter and how did you solve this problem?
Can you make it a bit less photorealistic? I'm afraid that people would confuse reality with the games created with it and it could pose a danger to society.
Do you plan to create some videos showing the process of setting up a basic example?
You reference "Turbobloat" and engines being "bloated" - which is to some extent fair. But it is maybe worth describing what that means to you - what features you consider "bloat" and which you have omitted from the Tramway project. To some the inclusion of an RPG framework may be considered bloat, for example, yet there is one present in Tramway.
That's why I added it in as an optional extension. It is a part of the larger engine project, but it is completely optional.
I like the C++ principle of paying only for what you use.
Understandable, but the main thing was - you lean a lot on the idea of "TurboBloat" being this universally understood concept. And I think many people might have a vague feeling that a lot of modern software is slow and "bloated", but you may want to be clear on what you consider "bloat".
The RPG engine was just an example of why it may not be such a universal thing, I'm not saying it's bad - but clearly you think that is not "bloat" whereas to some it might be. So it's maybe better to head this off at the pass and just write a little paragraph with some examples of bloat you have observed in other engines that you have consciously avoided in Tramway.
> This article will cover the usage of the framework for beginners who are either scared of C++ or just bad at it
I'm in the latter camp and want to thank you for your "Getting Started" Page. The teapot appeared and I understood things I did not think I would understand. I do not have time to finish your tutorial at the moment (due to only having 30 whole minutes for lunch), but I want to, which says more about how entertaining and accessible it is than anything.
Did anyone else find the Design Patterns page? It's a score board with a goal at 100%. I love this so much.
Linked from the home page:
”Design patterns used 82%.
When all of the patterns get used, I will delete the project and rewrite it in Rust. With no OOP.”
I was looking for ages and still haven’t found this.
You have to click on "Enterprise Mode" to find
> Design Patterns Used
> 82%
Thanks!
That is legitimately hilarious. This whole thing is like some massive appeal to pragmatism.
It only supports up to 800x600 resolution? For real? I know people like low res games and this is targeting old hardware but that is surprisingly low to me given the touting of how optimized this is.
Think of it as a fantasy console, like PICO-8, which despite its extreme restrictions is home to some incredible content, some of which exceeds what comes out of many big-studio engines. The imposed ceiling allows a solo dev or a team to concentrate on delivering gameplay and vivacious content instead of graphical gimmicks, which eat resources for both consumers and creators.
Nobody argues that FTL, Minecraft, Baba Is You, Stardew Valley, RuneScape, or Dwarf Fortress are not high enough resolution.
Minecraft is a bad example. It uses low resolution textures, but the screen resolution is as big as your display. I'm not even sure what the maximum is.
I could be wrong; however, I believe all the listed games can be run in full screen at modern resolutions.
Baba Is You and Stardew would require scaling because their pixel art expects a certain grid size to encompass the screen. Factorio might as well, I'm not sure. Dwarf Fortress is played in a terminal, so whether it can be said to even have graphics is debatable.
Very cool! There need to be more options for developers with lower-end boxes and for gamers with low-end hardware. Unreal Engine 5 is a lost cause nowadays without 64GB of RAM, Unity is a mess, and there need to be more options than Godot.
In my youth I cut my teeth on the Quake 2 SDK. And even without a 3D suite and a C compiler I could get creating. When the Rage toolkit became available, almost none of the community were as besotted with eagerness as they had been before. It was a 30GB+ download with some hefty base requirements. While Rage could run on a 4-core machine, not many gamers at the time had 16-core Xeons and 16GB of RAM! The worst the HL2 modding scene had to contend with was running Perl on Windows.
Still waiting for bevy to get an official editor.
Love the entity init->use->yeet cycle. Fantastic terminology, may steal it.
    void Entity::Yeet() {
        yeetery.push_back(this);
    }
I don't know anything about game programming but I quite approve of your sense of humor.
From the perspective of someone who's dabbled in 3D graphics, and has made an engine for 3D visualizations for my science projects:
What is blocking this from high resolutions, and dynamic or smooth lighting? The former is free, and you can do the latter in Vulkan/DX/Metal/OpenGL etc. using a minimal vertex and fragment shader pair.
There's literally nothing preventing you from dragging the edge of the engine window and resizing it, or calling the screen resize function from the C++ or Lua API.
That bit about 24-bit color and 800x600 resolutions was mostly meant to be a fun nod to promotional text that you could find on the backs of old game boxes.
The default renderer for the engine is meant to emulate what you could achieve with a graphics card that has a fixed-function graphics pipeline.
I'll do a more modern renderer later; for now I am mostly focusing on the engine architecture, tools and workflows.
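(For illustration, resizing from user code would be a one-liner, something like the sketch below; the function name here is made up, not necessarily the engine's actual API, so check the Tramway docs for the real call.)

    #include <cstdint>
    #include <iostream>

    // Stand-in for the engine's real resize call (hypothetical name).
    namespace tram {
        void SetScreenSize(std::uint32_t w, std::uint32_t h) {
            std::cout << "resize to " << w << "x" << h << "\n";
        }
    }

    int main() {
        // Nothing stops you from going well past 800x600.
        tram::SetScreenSize(2560, 1440);
    }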
Great info! That answers my question entirely.
You said it's compatible with hardware from 15 years ago, but one of the examples has the graphical complexity of Half-Life from about 25 years ago; could this engine be optimized further to run on hardware of that vintage, or at least closer to it? It would be pretty cool making games that can run on a Ryzen 9950X 32-thread monster but scale all the way down to a 1GHz Pentium III and a Voodoo 3.
The oldest computer that I have tried running this engine on is an HP laptop from 2008, running a 32-bit version of Windows XP.
It seemed to work fine, but I did have some issues with the Direct3D 9 renderer. The renderer works fine on other computers, so I have no idea if it's a driver bug (Intel tends to have buggy drivers) or if it's a bug on my part.
The biggest problem with using old hardware is drivers. Older drivers will only work on older operating systems and it's difficult to find C++20 compilers that will work on them.
Neat project!
By the way, to see a great example of how a modern game can be made using the classic Half-Life engine, look at the fan-made game Half-Life: Echoes [1].
It actually looks pretty decent, and the gameplay is top notch.
> It does what Godoesn't.
> I am not reinventing the wheel, I am disrupting the wheel industry.
I am laughing out loud
This is really cool. You should organize a game jam for it.
How is the wasm support? My main issue with Godot was large bundle sizes and slow load times. (GameMaker kicks its ass on both, but I never got the hang of it.)
I would say that it is way too early for a game jam.
The WebAssembly builds seem to work fine. A basic project takes up around 20MB and takes a couple of seconds to load in, so it's not great, but then again I haven't done any optimizations for this.
> Btw, the name is probably temporary
It's announced, and the name is fine, so it'll stick :)
This looks really cool, great work. One thing I want to preregister though: I bet against the whole Entity subclass thing. 60% of the way through the first serious-business project, you're going to RUE THE DAY. I'll look forward to seeing what people do :)
This is a really cool project, and I love the writing style.
I am also in the early days of writing a very primitive 2.5D Raycasting engine [0] (think Wolfenstein3D) and have just got to texture mapping. Very fun
It's open source and written in C; a pretty small and easy-to-follow codebase so far.
[0]- https://github.com/con-dog/2.5D-raycasting-engine/blob/maste...
I have very similar, strong opinions about game engines and I think this is a great project. I am definitely going to mess around with this after work today.
This sounds pretty cool! I like the name too, I would keep it like that.
This website rules.
Just wanna say the website aesthetic is legendary. Very on brand.
makes me feel like a kid again.
Except that it would be way better if it wasn't arbitrarily limited to a tiny column. I have a large screen, use it please. Don't make me dig into the developer console to undo your fixed width in order to have a pleasant reading experience.
Lots of people don't think super wide text is pleasant.
Seems pretty entitled if you ask me.
I don't think it's entitled to point out that someone's choices make for a bad user experience. I'm not putting OP on blast and calling him the worst person in the world, I'm simply saying "this is really unpleasant, hopefully you change it to be better".
Dude, it's period correct; just hit Ctrl/Cmd and plus and zoom in like the rest of us.
But it's not even period correct! A website of that era would've filled the entire screen no matter what the size, because hobbyists weren't doing weird layout shenanigans.
I saw "fixed function pipeline" and immediately think of RTX Remix. This could've been raytracing modded in to add Turbobloat lol
This is fantastic, actually. I love that this will let us create games in the late 90s FPS style but with all the niceties of modern hardware. Now if only I had any skill in 3d modelling...
The writeup, demos and proofs of concept, along with transparent roadmap/todos on the GitHub page are top notch. Great presentation. I definitely see myself trying this.
This is evidence of a great moment in modern indie game dev: the power of fun and simple prototyping.
I fucking love this!
Hope some initial tutorials become available. I’ll gladly contribute some but I need a little guide to get started.
> But what if all that you really want to make is just a lowpoly horror roguelite deckbuilder simulator?
Is this a reference to Inscryption?
Makes me wonder how far we can go with simple but high quality lightmaps.
It's a practical way to bring global illumination to the masses without real-time ray tracing.
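To be concrete about why it's cheap: all the expensive bounce lighting is computed offline into a texture, and at runtime you just look it up. A toy C++ sketch of that lookup (CPU-side for clarity; in a real engine this is a single texture fetch in a fragment shader):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Lightmap {
        int width, height;
        std::vector<float> texels;  // precomputed bounce lighting, one value per texel

        // Bilinear sample at (u, v) in [0,1] -- the entire runtime cost of baked GI.
        float Sample(float u, float v) const {
            float x = std::clamp(u, 0.0f, 1.0f) * (width - 1);
            float y = std::clamp(v, 0.0f, 1.0f) * (height - 1);
            int x0 = static_cast<int>(x), y0 = static_cast<int>(y);
            int x1 = std::min(x0 + 1, width - 1), y1 = std::min(y0 + 1, height - 1);
            float fx = x - x0, fy = y - y0;
            auto at = [&](int xi, int yi) { return texels[yi * width + xi]; };
            float top    = at(x0, y0) * (1 - fx) + at(x1, y0) * fx;
            float bottom = at(x0, y1) * (1 - fx) + at(x1, y1) * fx;
            return top * (1 - fy) + bottom * fy;
        }
    };

    int main() {
        // A 2x2 map: dark on the left, bright on the right.
        Lightmap map{2, 2, {0.1f, 1.0f, 0.1f, 1.0f}};
        std::cout << map.Sample(0.5f, 0.5f) << "\n";  // prints 0.55
    }

Final surface color is just albedo times that sample, so no rays get traced at runtime.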
This is great! I'm wondering if there's anything on the roadmap for multiplayer support?
I think a project like this is a good idea with the popularity of retro 3D games and "de-makes" nowadays.
Using a modern engine seems like overkill.
Wait, so what is the bit about Morrowind and Half life? Doesn't seem to be mentioned anywhere.
> When all of the patterns get used, I will delete the project and rewrite it in Rust. With no OOP.
https://racenis.github.io/tram-sdk/patterns.html
Love it.
Woah, it's cutting it close - just 3-5 aren't used! The Rust port might be on the horizon :D
Time to cook up a PR.
Can this be used as an alternative to Hammer to develop HL maps/mods?
It showed TrenchBroom being used to make maps, and I don't think that can be used to develop GoldSource maps, so most likely not.
Damn this looks sweet! I’m gonna check this out. Cool project!
Can it run on a MS-DOS machine with 640 KB of RAM?
That would call for a Wii port ;)
racenis, what program did you use to draw the header graphic?
I dream of a Mac port, but it's beyond my skills.
I made the website header in GIMP. The logo in the repository README was made in a very old version of MS office.
Don't understand shit, but congrats on the website. Is this React 19?
Good. This is exactly what I've been complaining about for decades now...
I also have my own engine, although it needs some refurbishment. I've never quite found the time to polish it to a point where it can be sold. It also runs on tiny old devices, although if you limit yourself to desktop hardware, that means anything from the last 30 years or so. It also has a design that allows it to load enormous (i.e. universe-scale) data by streaming, most often with an imperceptible loading time... on the iPhone 4, in about 200ms you are in an interactive state which could be "in game".
Unity and Unreal are top-tier garbage that don't deserve our money and time. The bigger practical reason to use them is that people have experience and the plugin and extension ecosystems are rich and filled with battle tested and useful stuff.
Bespoke big-company engines are often terrible too. Starfield contains less real-world data than my universe app, but somehow looks uglier and needs a modern PC to run at all. Mine runs on an iPhone 4, looks nicer, and puts you in the world in the first 200ms... you might think it's not comparable, but it absolutely is; all of the same techniques could be applied to get exactly the same quality of result with all their stacks and stacks of art and custom data - and they could have a richer bunch of real-world data to go with it!
>Unity and Unreal are top-tier garbage that don't deserve our money and time. The bigger practical reason to use them is that people have experience and the plugin and extension ecosystems are rich and filled with battle tested and useful stuff.
Both are effectively magical sandboxes where platform support is someone else's problem.
Unity is still pretty great, but it's chained to a company that has no real business plan for sustainability.
Unreal is okay, but developers aren't using it right. For any bigger project you should customize the engine for your needs, or at the very least spend some time to optimize.
But we need to ship and we need to ship now.
Blame the developers not the tools.
i've been doing this for decades and my bedroom work has never done anything but put unreal and unity to shame. from top to bottom i cannot understand the ignorance of their design from a simple "a programmer is making this" standpoint. it comes from a legacy of "a rookie wannabe with too much money had a good shot and too much promotion"
unreal is fucking awful, it's a masterclass in how not to make:
* components
* hierarchies
* visual scripting
* networking
* editors
* geometry
* rendering
* culling
* in-game ui
* editor ui
* copy-paste
* kinematics
* physics integration
* plugin support
* build system
it's just a tower of mistakes i learned not to make before i dared to even enter the industry
it is fantastically and incredibly bad.
unity is a bit similar but they add c# complexity to the mix, and in the beginning that was a much bigger disaster, especially going with mono. .NET was an enormous misstep by microsoft and remains so; although it improves over time, they could have just not gotten it so incredibly wrong to start with.
i could go on.
i definitely blame the developers of the terrible tools. i couldn't have made them that badly at most points in my career, including the super early days in some cases.
they are also hard to fix because of the staggering depth of the badness.
if you would like more specifics feel free to poke, its more about not typing a wall of text than the cognitive load of knowing better, which is around zero.
oh... and the garbage collection is garbage that enables incompetents to make more garbage. never needed or wanted it. i had one hard memory leak to deal with in my life in native code. and a fucking zillion in their shit fest.
EDIT: i shit you not, it has not learned my first lessons from being an 8 year old trying to draw mandelbrot sets in qbasic.
If you can legitimately make a better engine in your basement that's just as easy to use, then please open source it. If it's in a high-level language with types (C#, TypeScript, Haxe, Java), I'll personally donate $100.
Both Unity and Unreal have cost billions to make.
Godot is cool, but GDScript isn't fun (in general I hate learning a programming language for a single framework; Dart is the last time I do that), and C# support is still iffy. Godot tries to do everything Unity can, but can't do it all particularly well. The community is also a cult.
I've tried Godot like 3 times and it always feels like janky Unity.
During the Unity drama every single game dev post on Reddit would get a bunch of comments saying you should switch to Godot.
An open source game engine that doesn't accept PRs and is basically run by 3 people.
Neat.
Personally my dream engine would be Haxe + an editor + docs + Web Assembly/Native/Mobile support.
But engines are very hard and expensive to make. For my current project, it's so text heavy I realized I'm better off just using React/HTML/CSS.
The game is meant to be played in a website, but it's going to be open source so you can run it locally if you wish.
I love the retro aesthetic of your website - it perfectly matches the spirit of the project. The detailed documentation and transparent roadmap on GitHub are excellent. It's clear you've put a lot of thought and effort into making this accessible for developers. Great job on the presentation overall!
Could call it Mega McBloatface(?)
The demo(s) should be linked from the page so that HN can complain that the game is too hard.
https://racenis.itch.io/sulas-glaaze
https://racenis.itch.io/froggy-garden
It runs well in Firefox on my low end laptop.
Why aren't more people commenting about Dr. Pepper?
Can you add the rule:
    @media (prefers-reduced-motion) {
        .animated {
            display: none;
        }
    }
to the page, please? no_gifs.css is alright, but I need to visit the page (and run JavaScript) before I can find and click it, and by that point the damage is done.
Love the revolving toilet.
The filename is 'poland.gif', I wonder what's the message there.
It's the same GIF that's used in the Polish milk soup song video.
In case you missed it, have a look at the page about animation: https://racenis.github.io/tram-sdk/learn/animations.html
10/10 choice of model and animation, this website is amazing.
Based.
I like the name. It's the SDK that gives the name meaning anyway.
License?
You've obviously put a lot of effort into this, but I'm always lost at how people publish something open source and forget to actually put a license on it. Since right now it's technically closed source, hypothetically, if you become a monk in the woods next week, no one else can fork your code.
I just realized that I had forgotten to actually add the license file to this repository. Added it now.
The license is MIT. Thanks for noticing.
An MIT license file was added (or edited) a minute ago in the repo :)