I recently ported WebKit's libpas memory allocator[1] to Windows; on the Linux and Darwin ports it used pthreads. Depending on which pthreads features you're using, it's not that much code to shim to Windows APIs: about 200 LOC[2] for WebKit's usage, which is a lot smaller than pthreads-win32.
[1] https://github.com/WebKit/WebKit/pull/41945 [2] https://github.com/WebKit/WebKit/blob/main/Source/bmalloc/li...
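For a sense of what a shim like that looks like: it's mostly thin wrappers mapping pthread mutex/condvar calls onto SRW locks and condition variables. A minimal sketch (not the actual WebKit code; the shim_* names are made up):

```c
#include <windows.h>

/* Illustrative stand-ins for pthread_mutex_t / pthread_cond_t. */
typedef SRWLOCK            shim_mutex_t;
typedef CONDITION_VARIABLE shim_cond_t;

static void shim_mutex_init(shim_mutex_t *m)   { InitializeSRWLock(m); }
static void shim_mutex_lock(shim_mutex_t *m)   { AcquireSRWLockExclusive(m); }
static void shim_mutex_unlock(shim_mutex_t *m) { ReleaseSRWLockExclusive(m); }

static void shim_cond_init(shim_cond_t *c)      { InitializeConditionVariable(c); }
static void shim_cond_signal(shim_cond_t *c)    { WakeConditionVariable(c); }
static void shim_cond_broadcast(shim_cond_t *c) { WakeAllConditionVariable(c); }

/* pthread_cond_wait equivalent: atomically releases the lock, sleeps,
 * and reacquires the lock before returning. */
static void shim_cond_wait(shim_cond_t *c, shim_mutex_t *m)
{
    SleepConditionVariableSRW(c, m, INFINITE, 0);
}
```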
At the time (11 years ago) I wanted this to run on Windows XP.
The APIs you use there (e.g. SleepConditionVariableSRW()) were only added in Vista.
I assume a big chunk of pthread emulation code at that time was implementing things like that.
These VirtualAlloc calls may intermittently fail if the pagefile is growing...
Ah yeah, I see Firefox ran into that and added retries:
https://hacks.mozilla.org/2022/11/improving-firefox-stabilit...
Seems like a worthwhile change, though I'm not sure when I'll get around to it.
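Roughly something like this, I imagine (a sketch only; the attempt count and backoff are arbitrary, not what Firefox settled on):

```c
#include <windows.h>

/* Retry VirtualAlloc a few times, since commit can fail transiently
 * while the pagefile is still growing. */
static void *commit_with_retry(size_t size)
{
    for (int attempt = 0; attempt < 10; attempt++) {
        void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);
        if (p)
            return p;
        Sleep(50 * (attempt + 1));  /* brief backoff before retrying */
    }
    return NULL;  /* genuinely out of memory (or the commit limit is hard) */
}
```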
This is something you also need to do for other Win32 APIs, e.g. file write access may be temporarily blocked by anti-virus programs or whatever, and not handling that makes for unhappy users.
Never knew about the destructor feature for fiber local allocations!
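For anyone else who hadn't seen it: the callback you pass to FlsAlloc runs when a fiber is deleted or its thread exits, much like the destructor argument of pthread_key_create. A minimal sketch:

```c
#include <windows.h>
#include <stdlib.h>

/* Called with the slot's value on fiber deletion / thread exit. */
static void WINAPI fls_destructor(void *value)
{
    free(value);
}

void fls_example(void)
{
    DWORD slot = FlsAlloc(fls_destructor);   /* register the destructor */
    if (slot == FLS_OUT_OF_INDEXES)
        return;
    FlsSetValue(slot, malloc(64));           /* freed automatically on teardown */
}
```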
I'm a big fan of pigz. I discovered it 6 years ago when I had some massive files I needed to zip and a 48-core server I was underutilizing. It was very satisfying to open htop and watch all the cores max out.
Edit: found the screenshot https://imgur.com/a/w5fnXKS
that was a big big file indeed
Very old post, needs 2013 in the title
https://web.archive.org/web/20130407195442/https://blog.kowa...
Seems to be updated, no?
Not much. The only non-cosmetic difference is:
-Premake supports Visual Studio 2008 and 2010 (and 2012 supports 2010 project files via conversion).
+Premake supports latest Visual Studio 2018 and 2022 project files via conversion).
I'm not sure how willing I'd be to trust a pthread library fork from a single no-name github person. The mingw-w64 project provides libwinpthread, which you can download as source from their sourceforge, or as a binary+headers from a well-known repository like msys2.
> Porting pthreads code to Windows would be a nightmare.
Porting one application that uses pthreads to the Win32 API directly is, however, a lot more reasonable, and it gives you more room to deal with impedance mismatches than a full API shim does. The same goes for dirent and other things, as well as for the reverse direction. A slightly higher-level abstraction over the things your program actually needs is usually a better solution for cross-platform applications than using one OS API and emulating it on other systems.
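For example, instead of a full dirent shim on Windows, the application can own a small "for each file in this directory" helper with one backend per platform (an illustrative sketch; names are made up):

```c
#include <stdio.h>

/* A tiny directory-listing helper the application owns, instead of
 * emulating dirent on Windows or the Win32 Find* APIs on POSIX. */
typedef void (*file_cb)(const char *name, void *ctx);

#ifdef _WIN32
  #include <windows.h>
  static void for_each_file(const char *dir, file_cb cb, void *ctx)
  {
      char pattern[MAX_PATH];
      WIN32_FIND_DATAA fd;
      snprintf(pattern, sizeof pattern, "%s\\*", dir);
      HANDLE h = FindFirstFileA(pattern, &fd);
      if (h == INVALID_HANDLE_VALUE)
          return;
      do {
          cb(fd.cFileName, ctx);
      } while (FindNextFileA(h, &fd));
      FindClose(h);
  }
#else
  #include <dirent.h>
  static void for_each_file(const char *dir, file_cb cb, void *ctx)
  {
      DIR *d = opendir(dir);
      if (!d)
          return;
      struct dirent *e;
      while ((e = readdir(d)) != NULL)
          cb(e->d_name, ctx);
      closedir(d);
  }
#endif
```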
Worth mentioning that this is only of interest as technical info on the porting process.
The port itself is very old and therefore very outdated.
Perhaps it's worth adding a note at the top of the post saying so, maybe mentioning alternatives such as an Actually Portable™ build of `pigz`[1] or just a Windows build of zstd[2].
I don't think the port itself is very old. The latest version of the original pigz seems to have been released in 2023[1], and the port seems to be of pigz from around that time[2].
[1] - https://zlib.net/pigz/
Pigz? Good old Pigzip? :)
https://pc-freak.net/files/hackles.org/cgi-bin/archives.pl%3...
I don't see any relation. Pigz is a multithreaded reimplementation of gzip (a drop-in replacement).
I wish premake could gain more traction. It is the comprehensible alternative to CMake etc.
I'd rather everyone use CMake than have to deal with yet another build system. Wouldn't be so bad if build systems could at least agree on the user interface and package registry format.
Xmake[0] is as-simple-as-premake and does IIRC everything Premake does and a whole lot more.
It's 2025, just use meson
Completely useless in an airgapped environment
Could you elaborate on that?
I'm guessing it needs internet for everything and can't work with local repositories.
Not really a fan of Meson but I doubt that that's the case as it is very popular in big OSS projects and distributions aren't throwing a fit.
The best kind of porting - other people have already done most of the work for you!
Repository link: https://github.com/kjk/pigz
This is clearly aimed at faster results in a single user desktop environment.
In a threaded server type app where available processor cores are already being utilized, I don't see much real advantage in this --- if any.
Depends on the current load. I've worked places where we would create nightly Postgres dumps via pg_dumpall, then pipe through pigz to compress. It's great if you run it when load is otherwise low and you want to squeeze every bit of performance out of the box during that quiet window.
this predates the maturation of pg_dump/pg_restore concurrency features :)
Not to overstate it, but embedding the parallelism in the application follows the logic of "the application is where we know we can do it", while embedding the parallelism in a discrete lower layer and using pipes follows "this is the generic UNIX model of how to process data".
The thing with "and pipe to <thing>" is that you then reduce to a serial buffer, with the delay of decoding the pipe input. I do this because it's often logically simple, and the serial-to-parallel deblocking delay on a pipe is low.
Which is where xargs and the prefork model come in: instead you segment/shard the work, and either have no re-unification burden or it's a simple serialise over the outputs.
When I know I can shard, and I don't know how to tell the application to be parallel, this is my path out.