45% slower seems pretty decent considering they use a wasm kernel they developed to mimic the unix kernel so they can run unmodified unix programs inside the browser. It's actually pretty impressive that they did this, and even more impressive that it works and, like another commenter said, is not even an order of magnitude slower.
I'm more interested in 1) uses of wasm in the browser that don't involve running unmodified unix programs and 2) wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees. Could it be the future for writing native applications?
Languages like Kotlin, C#, and Rust, as well as C/C++ etc., support wasm quite well. Could we see it become a legitimate target for applications in the future, if the performance gap were closer to 10%-ish? I would personally prefer running wasm binaries with guaranteed (as much as possible, of course) sandboxing over raw binaries.
edit: it's from 2019, there have been significant improvements made to wasm since then.
> wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees
I've been using it this way for DecentAuth[0]. It's awesome. I compile a single native codebase to wasm, and I can use my library from JS, Go, or Rust. New host languages only require about 1000 lines of glue. I don't have to worry at all about building for different architectures.
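For a rough sense of what the host side looks like (a minimal sketch assuming the wasmtime and anyhow crates and a hypothetical `add` export, not DecentAuth's actual bindings):

    use anyhow::Result;
    use wasmtime::{Engine, Instance, Module, Store};

    fn main() -> Result<()> {
        let engine = Engine::default();
        // "decentauth.wasm" and the `add` export are placeholders.
        let module = Module::from_file(&engine, "decentauth.wasm")?;
        let mut store = Store::new(&engine, ());
        // Assumes a module with no imports; real code would wire up a Linker/WASI.
        let instance = Instance::new(&mut store, &module, &[])?;
        let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
        println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
        Ok(())
    }

Each host language needs roughly this much glue plus type conversions, which is where the ~1000 lines come from.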
> wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees
Please just use Docker in a microVM or whatever. It's 0% slower and 100% more mature.
> Please just use Docker in a microVM or whatever. It's 0% slower and 100% more mature.
Wasm has different characteristics than docker containers and as a result can target different use cases and situations. For example, imagine needing plugins for game mods or an actor system, where you need hundreds or thousands of them, with low-latency startup times, low memory footprints, and low overhead. This is something you can do sanely with wasm but not with containers. So containers are great for lots of things, but not for every conceivable thing; there's still a place for wasm.
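To make that concrete, here's a minimal sketch (assuming the wasmtime and anyhow crates and a hypothetical plugin.wasm with no imports): compile the module once, then stamp out many isolated instances cheaply.

    use anyhow::Result;
    use wasmtime::{Engine, Instance, Module, Store};

    fn main() -> Result<()> {
        let engine = Engine::default();
        // Compile the (hypothetical) plugin once...
        let module = Module::from_file(&engine, "plugin.wasm")?;

        // ...then create many isolated instances, each with its own store and
        // linear memory. Instantiation is far cheaper than starting a
        // container per plugin.
        let mut plugins = Vec::with_capacity(1000);
        for _ in 0..1000 {
            let mut store = Store::new(&engine, ());
            let instance = Instance::new(&mut store, &module, &[])?;
            plugins.push((store, instance));
        }
        println!("loaded {} plugin instances", plugins.len());
        Ok(())
    }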
yeah, I mostly see it competing with Lua and small function execution in a safe sandbox (e.g. similar scope as eBPF). and maybe for locking down problematic stuff that isn't ultra performance sensitive, like many drivers.
so agreed, plugins. in games or in the kernel.
But way more difficult and with a much higher attack surface area.
And also, it's not necessarily apples to apples. It would be nice to be able to drop a compiled WASM module into your codebase and use it from just about any language on the backend. You could reuse a lot of code that way across different services without the overhead of spinning up yet another container. And you could potentially even run untrusted code in a sandboxed way.
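As a rough sketch of the sandboxing part, a runtime like wasmtime can meter how much work untrusted code is allowed to do via "fuel" (hypothetical untrusted.wasm and `run` export; the exact fuel API has changed across wasmtime versions):

    use anyhow::Result;
    use wasmtime::{Config, Engine, Instance, Module, Store};

    fn main() -> Result<()> {
        // Enable fuel metering so the guest can't run forever.
        let mut config = Config::new();
        config.consume_fuel(true);
        let engine = Engine::new(&config)?;

        let module = Module::from_file(&engine, "untrusted.wasm")?;
        let mut store = Store::new(&engine, ());
        store.set_fuel(1_000_000)?; // guest traps once this budget is spent

        let instance = Instance::new(&mut store, &module, &[])?;
        let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
        run.call(&mut store, ())?; // returns an error if fuel runs out
        Ok(())
    }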
Getting an end user to set up and run docker to run an app is a non starter for most things.
does that allow me to do GPU and real-time audio work on Windows and macOS?
not only is this a completely different use case, it's not even true:
https://stackoverflow.com/questions/60840320/docker-50-perfo...
tl;dr: the libseccomp version used in combination with docker's default seccomp profile.
More discussion here https://github.com/moby/moby/issues/41389
Setting up docker and a microVM is orders and orders of magnitude harder and less ergonomic than using your browser. These are not at all interchangeable.
> wasm outside the browser
That it’s not even an order of magnitude slower sounds actually pretty good!
45% slower to run everywhere from a single binary...
I'll take that deal any day!
45% slower to run everywhere from a single binary... with fewer security holes, without undefined behavior, and trivial to completely sandbox.
It's definitely a good deal!
> without undefined behavior
Undefined behaviour is defined with respect to the source language, not the execution engine. It means that the language specification does not assign meaning to certain source programs. Machine code (generally) doesn't have undefined behaviour, while a C program could, regardless of what it runs on.
Native code generally doesn't have undefined behaviour. C has undefined behaviour and that's a problem regardless of whether you're compiling to native or wasm.
That which is old is new again. The wheel keeps turning…
“Wait we can use Java to run anywhere? It’s slow but that’s ok! Let’s ride!”
There's a reason Java applets got deprecated in every browser. The runtime was inherently insecure. It just doesn't work for the web.
Also, targeting the JVM forces you to accept garbage collection, class-based OO and lots of pointer chasing. It's not a good target for most languages.
Java's pretty good, but wasm is actually a game changer.
I am a huge, huge fan of wasm. The first time I was able to compile a Qt app to Linux, Windows, Mac, and wasm targets, I was so tickled pink it was embarrassing. Felt like I was truly standing on the shoulders of giants and really appreciated the entirety of the whole "stack", if you will.
Running code in a browser isn’t novel. It’s very circular. I actually met someone the other day that thought JavaScript was a subset of Java. Same person was also fluent in php.
Wasm is really neat, I really love it. My cynical take on it is that, at the end of the day, it’ll just somehow help ad revenue to find another margin.
Fair. Running in the browser isn't novel, but JS/TS are some of the most popular languages in history and that almost certainly never would have happened without monopolizing the browser.
Expanding margins are fine by me. Anticompetitive markets are not. My hope is that wasm helps to break a couple strangleholds over platforms (cough cough iOS cough Android)
I really don’t think Apple is going to let anyone get away with too much browser appifying of iOS.
Is compiling so hard?
(2019) Popular in:
2019 (250 points, 172 comments) https://news.ycombinator.com/item?id=20458173
2020 (174 points, 205 comments) https://news.ycombinator.com/item?id=19023413
45% slower means..?
Suppose native code takes 2 units of time to execute.
“45% slower” is???
Would it be 45% _more time?_
What would “45% _faster_” mean?
What looks like the relevant table has a summary line saying "geometric mean: 1.45x" so I think that in this case "45% slower" means "times are 1.45x as long".
(I think I would generally use "x% slower" to mean "slower by a factor of 1+x/100", and "x% faster" to mean "faster by a factor of 1+x/100", so "x% slower" and "x% faster" are not inverses, you can perfectly well be 300% faster or 300% slower, etc. I less confidently think that this is how most people use such language.)
It’s a fair point, that way of expressing it is always a bit confusing. Is it the original time plus 45%? Is it 45% of the original speed?
I think it is easier to understand in terms of throughput.
So 45% less work per unit of time, so 55% of the work.
0% slower means "the same speed." The same number of seconds.
10% slower means "takes 10% longer." 10% more seconds.
So 45% slower than 2 seconds is 1.45 * 2 = 2.9 seconds.
I guess it is clearer if expressed like "The native application took only x% of the time of the WASM equivalent".
The data here is interesting, but bear in mind it is from 2019, and a lot has improved since.
This is pretty good actually, considering the low-hanging optimizations still left for the optimizer, and that the alternative is JS, which generally performs 2-10x slower.
I think vectorization support will narrow the aggregate difference here as a lot of SPEC benefits from auto vectorization if I recall correctly.
(2019)
Yeah, I've seen this when testing Rust code compiled to native and to wasm. I don't know about 45% though, I haven't measured it.
... in browsers, which at best JIT compile. There are several WASM runtimes that AOT compile and have significantly better performance (e.g. ~5-10% slower).
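For example, wasmtime can precompile a module ahead of time and load the native artifact later without any JIT warm-up (a sketch assuming the wasmtime and anyhow crates and a hypothetical app.wasm; other runtimes have similar AOT modes):

    use anyhow::Result;
    use wasmtime::{Engine, Instance, Module, Store};

    fn main() -> Result<()> {
        let engine = Engine::default();

        // Ahead-of-time compile the module to native code once...
        let wasm = std::fs::read("app.wasm")?;
        std::fs::write("app.cwasm", engine.precompile_module(&wasm)?)?;

        // ...then later load the precompiled artifact directly.
        // SAFETY: only deserialize artifacts you produced with a matching engine.
        let module =
            unsafe { Module::deserialize(&engine, std::fs::read("app.cwasm")?)? };
        let mut store = Store::new(&engine, ());
        let _instance = Instance::new(&mut store, &module, &[])?;
        Ok(())
    }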
The title is highly misleading.
It’s not misleading to measure the performance of WebAssembly in a web browser.
Yeah, but it's specifically testing things that implement against a POSIX API (because generally that's what "native" apps do, omitting libc and other OS-specific foundation libraries that are pulled in at runtime or otherwise). I suspect that if the applications linked against some WASI-like runtime it might be a better metric (native WASI as a lib vs. a wasm runtime that also links against it). Mind you, that still wouldn't help the browser runtime... but it would be a better metric for a wasm-to-native performance comparison.
But as already mentioned, we have gone through all this before. Maybe we'll see wasm bytecodes pushed into silicon like we did with the JVM... although perhaps this time it might stick, or move up into server hardware (which might have happened, but I only recall embedded devices supporting hardware-level JVM bytecodes).
In short, the web browser bit is omitted from the title.
Just means the browsers can catch up.
Initially slower but then faster after full compilation
Browsers have been doing (sometimes tiered) AOT compilation since wasm's inception.
could you please name them?