• FractalHQ 17 minutes ago

    Definitely going to try this out!!

    I’ve been using the `vitest bench` command; being able to slap a `.bench.ts` file next to a module and go to town is convenient: https://vitest.dev/guide/features.html#benchmarking
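
    For reference, a minimal `.bench.ts` next to a module looks roughly like this (`sum` is a stand-in module, not anything specific):

    ```ts
    // sum.bench.ts: discovered and run by `vitest bench`
    import { bench } from 'vitest';
    import { sum } from './sum';

    bench('sum of 1k numbers', () => {
      sum(Array.from({ length: 1_000 }, (_, i) => i));
    });
    ```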

• cybice 6 hours ago

    Hi! In your benchmark, do you use a fixed number of iterations to stop the test, or do you apply a statistical criterion, such as Student’s t-test, to determine when to stop?

    • evnwashere 6 hours ago

      I didn’t want it to be complex, so it uses a simple time budget plus a minimum number of samples; both (and more) can be configured with the lower-level API.

      In practice I haven’t found any JS function that gets faster after mitata’s time budget (excluding CPU clock speed increasing because of the continuous workload).

      Another problem is that garbage collection can cause long pauses, which produce big jumps in some runs and make the loop keep searching for the best result longer than necessary.
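
      In sketch form, that stopping rule looks something like the following (thresholds and names are illustrative, not mitata’s actual internals):

      ```ts
      // Hedged sketch: keep sampling until BOTH the minimum sample count
      // is reached AND the time budget is spent. Defaults are made up.
      function measure(fn: () => void, budgetMs = 500, minSamples = 128): number[] {
        const samples: number[] = [];
        const start = performance.now();
        while (samples.length < minSamples || performance.now() - start < budgetMs) {
          const t0 = performance.now();
          fn();
          samples.push(performance.now() - t0);
        }
        // Summarizing with min/percentiles rather than the mean dampens
        // the GC-pause outliers mentioned above.
        return samples;
      }
      ```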

• izakfr 6 hours ago

    This is awesome! I’ve been working on optimizing a JavaScript library recently and am feeling the pain of performance testing. I’ll check this out.

• kookamamie 7 hours ago

    Is it by accident that "mitata", the project name, means "to measure" in Finnish?

    • evnwashere 7 hours ago

      It was hand-picked for exactly that :)

• steve_adams_86 3 hours ago

    Hey, I wrote about this once! I use it a ton. Thanks for your work. I can’t wait to dig into 1.0.

• moltar 8 hours ago

    Any plans for web-compatible output?

    I maintain this repo, and we hand-roll the stats page, but if we could get that for free it’d be so great!

    https://github.com/moltar/typescript-runtime-type-benchmarks

    • evnwashere 7 hours ago

      I’ve been thinking of reusing/creating something like https://perf.rust-lang.org/ that lets you pick and compare specific hashes/commits, with all the data coming from a JSON format.
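
      As a purely illustrative sketch (the file layout and result shape are assumptions, not mitata’s actual JSON output):

      ```ts
      // Hypothetical: compare stored per-commit benchmark results.
      // Assumes results/<commit>.json holds [{ name, avgNs }, ...].
      import { readFileSync } from 'node:fs';

      type Result = { name: string; avgNs: number };

      const load = (commit: string): Result[] =>
        JSON.parse(readFileSync(`results/${commit}.json`, 'utf8'));

      function compare(base: string, head: string): void {
        const baseline = new Map(load(base).map(r => [r.name, r.avgNs]));
        for (const r of load(head)) {
          const prev = baseline.get(r.name);
          if (prev) console.log(`${r.name}: ${((r.avgNs / prev) * 100).toFixed(1)}% of ${base}`);
        }
      }

      compare('abc1234', 'def5678'); // hypothetical commit hashes
      ```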

• wonger_ 8 hours ago

    This is for "headless" JavaScript outside the browser, right?

    • tbeseda 6 hours ago

      Never heard JS called "headless". Not sure I like it.

      Edit: all JS is "headless". Almost all languages are headless. _Software_ can be headless or have a GUI, but languages are naturally headless.

      • Waterluvian 6 hours ago

        Headless browsers. I guess this is a very closely related concept.

      • blovescoffee 6 hours ago

        There’s a lot of server-side JS. Mostly plumbing code, but there’s certainly “headless” JS.

        • tbeseda 5 hours ago

          I’m very aware of JS run on servers, and I knew that’s what OP meant. I’m saying I’m not sure I like the usage. Maybe it’s a generational dev-vocabulary thing... I prefer "browser" or "client" JS vs. "server" or "backend" JS.

    • evnwashere 8 hours ago

      It works anywhere JavaScript works, so you can easily run it in the browser too. Though the idea of making a jsbench-like website but with mitata’s accuracy (+ dedicated runners) keeps bugging me.
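
      Basic usage is the same either way; a minimal sketch (assuming the `bench`/`run` entry points from mitata’s README):

      ```ts
      import { bench, run } from 'mitata';

      // Register a benchmark; sync and async functions both work.
      bench('JSON round-trip', () => {
        JSON.parse(JSON.stringify({ hello: 'world' }));
      });

      await run(); // prints the results table
      ```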

• pavi2410 7 hours ago

    Wow, what timing! I started building Speedrun yesterday to accommodate my daily needs:

    https://toolkit.pavi2410.me/tools/speedrun

    https://github.com/pavi2410/toolkit/issues/8

• golergka 7 hours ago

    Wow, I was just looking at how to benchmark a streaming JSON parser that I’m working on! I’m building it specifically for performance-intensive situations with JSON strings up to gigabytes in size, and I thought I’d have to implement about half of the features you mention there, like parametrisation and automatic GC after every test.
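
    Parametrisation over input sizes can be as simple as registering one bench per size; a hedged sketch using only mitata’s basic `bench`/`run` (payload shapes are made up):

    ```ts
    import { bench, run } from 'mitata';

    // One benchmark per hypothetical payload size.
    for (const size of [1_000, 100_000, 10_000_000]) {
      const payload = JSON.stringify({ data: 'x'.repeat(size) });
      bench(`JSON.parse, ${size}-char payload`, () => JSON.parse(payload));
    }

    await run();
    ```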

    • dumbo-octopus 6 hours ago

      When you say streaming JSON parser, do you mean that it outputs a live “observable” object as it is streaming, or that it just doesn’t keep the entire source data in memory? I’ve done some work on the former for displaying rich LLM outputs as they are delivered; it’s a surprisingly underexplored area from what I’ve seen.
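
      The former roughly amounts to an interface like this (entirely hypothetical, just to pin down the distinction):

      ```ts
      // "Live" streaming parser: the partial value is observable while
      // bytes are still arriving, vs. a parser that merely avoids
      // buffering the whole source and only emits the finished value.
      interface LiveJsonParser {
        write(chunk: string): void;                   // feed the next chunk
        readonly value: unknown;                      // current partial result
        onUpdate(cb: (value: unknown) => void): void; // fires as the tree grows
      }
      ```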