IIRC MacRuby used to compile to native code on OSX using LLVM, and was supposed to support native OSX APIs and Objective-C frameworks. It always seemed like a neat idea, and a slick integration, but I guess Apple moved to Swift instead.
I'll have to pick up a copy of this "Ruby Under a Microscope" book when the new version comes out. I've always liked Ruby, I just haven't had much chance to use it.
Typical. I may get absolutely destroyed for this, but I'm professionally proficient in a ton of languages, including Ruby and the ones I'm about to mention:
This sounds like Microsoft when they moved from VB6 to VB.Net. At least they have a good thing going with C# though.
VB6 was quite an interesting beast. You could do basically everything that you could do in languages like C/C++, but in most cases, you could churn out code quicker. This even extended to DirectX/Direct3D! For Web pages? ASP Classic.
The tl;dr is that I really wish that ease of development were prioritized along with everything else. One of the reasons I like Ruby is the elegance of the language and ease of using it.
Note that I've been using it since the mid-2000s or so, but not exclusively (both it and VB6 defined my career, however). C# is my second favorite.
If Ruby had the GUI design tools VB6 had, it would be interesting to look at the popularity stats.
Anyway, I'm rambling, so there is that. ;)
Once a YJIT block executes enough times to warrant compilation, how does this system keep track of which types to compile for? Each block tracks how many times it's entered, but not how many times it's entered with Integer or Float or whatever types; so in the given example, how would Ruby handle compiling the "opt_plus" stub when the input types may vary?
And by what process is the correct compiled block used depending on the input variable types?
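For what it's worth, my rough mental model (sketched in plain Ruby below, not actual YJIT internals) is that each compiled version starts with a type guard and "side exits" to a generic path when the operands aren't the types it was compiled for:

    # Hypothetical sketch only -- not YJIT's real implementation.
    # Models a compiled "opt_plus" specialized for Integer + Integer,
    # guarded so that other operand types fall back to a generic path.

    def generic_plus(a, b)
      a + b                      # stands in for the interpreter's slow path
    end

    def compiled_opt_plus(a, b)
      if a.is_a?(Integer) && b.is_a?(Integer)  # type guard for the observed types
        a + b                    # stands in for the specialized machine code
      else
        generic_plus(a, b)       # "side exit" back to the generic path
      end
    end

    compiled_opt_plus(1, 2)      # takes the specialized path
    compiled_opt_plus(1.0, 2.0)  # guard fails, falls back

Is that roughly right, or does YJIT compile multiple specialized versions and chain the guards together?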
> To find hot spots, YJIT counts how many times your program calls each function or block
At first glance this seems too simple. Compare it to JavaScript JITs, which IIRC can compile hot spots even in functions that are only called a few times (e.g. those that contain heavy loops) via on-stack replacement. (Although I’ve also heard on-stack replacement called a “party trick” - more useful for optimising benchmark scores than for real code.)
But on the other hand, Ruby’s language design might help here. Idiomatic Ruby uses blocks for loop bodies - so can Ruby JITs optimise long-running loops by treating the loop body as just another function?
> Idiomatic Ruby uses blocks for loop bodies
Yes, that's something I want to dig into and explore in this chapter... when exactly does Ruby's JIT compiler activate and optimize our code? And you're right: since Ruby will JIT blocks as if they were separate functions, many loops will be optimized using this simple heuristic.
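For example, in a contrived snippet like the one below, the outer method might only be called once, but the block passed to each runs a million times, so it can become hot (and be compiled) on its own:

    # Contrived example: sum_of_squares is called just once, but the block
    # passed to #each runs a million times, so the loop body itself gets hot.
    def sum_of_squares(values)
      total = 0
      values.each { |v| total += v * v }
      total
    end

    puts sum_of_squares((1..1_000_000).to_a)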
Really happy to see Pat keeping it up! Not just his first Ruby Under a Microscope book but also his blog posts are amazing and a major source of inspiration for me. I met him in person at a Euruko conference. Such a great person.
What a lovely comment - thank you!
I find that using C as an intermediate step really helps in conceptualizing this process. It can be tough to imagine how to represent a language like Ruby in C. Essentially you have to start from the premise that everything is an object and every operation is a method call on an object, then build up from that. After that, going from C to assembler is more manageable. YMMV.
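For instance, even an innocent-looking expression is really just a method call on an object; once you see that, the further translation into C function calls (and from there into assembler) feels a lot less magical:

    # In Ruby, operators are ordinary method calls on objects:
    1 + 2            # => 3
    1.+(2)           # => 3, the same call written explicitly
    1.send(:+, 2)    # => 3, the dynamic dispatch spelled out

    # The same holds for "built-in" types, because they're ordinary objects too:
    "a" + "b"        # => "ab", i.e. "a".send(:+, "b")
    [1, 2] + [3]     # => [1, 2, 3]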
I loved Ruby Under a Microscope when I first read it, and using that knowledge, I was able to have fun with some CTFs years ago.
I haven't kept up with the evolving Ruby implementation internals, so I will sure as heck buy this new version of the book.