You can do this with ssh (and socat or mkfifo):
# receiver
socat UNIX-RECV:/tmp/foobar - | my-command
# sender
my-command | ssh host socat - UNIX-SENDTO:/tmp/foobar
You can relay through any other SSH server if your target is behind a firewall or subject to NAT (for example the public service ssh-j.com). This is end-to-end encrypted (SSH inside SSH):
# receiver
ssh top-secret@ssh-j.com -N -R ssh:22:localhost:22
socat UNIX-RECV:/tmp/foobar - | my-command
# sender
my-command | ssh -J top-secret@ssh-j.com ssh socat - UNIX-SENDTO:/tmp/foobar
(originally posted on the thread for "beam": https://news.ycombinator.com/item?id=42593135)
This doesn't do most of what dumbpipe claims to do: it doesn't use QUIC, doesn't avoid using relays when possible, doesn't pick a relay for you, and doesn't keep your devices connected as network connections change. It also depends on you doing the ssh key management out-of-band, while dumbpipe appears to put the keys into random ASCII strings.
WireGuard is more similar.
Wireguard doesn't do most of those either
That's true, just some.
First sentence after following the link this topic is about:
Dumb pipe punches through NATs, using on-the-fly node identifiers. It even keeps your machines connected as network conditions change.
You can simplify things even more by running https://www.tarsnap.com/spiped.html
It doesn't even assume ssh.
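For reference, a minimal sketch of the spiped pattern (the host name, ports and key file here are placeholders; spiped just needs the same pre-shared key on both ends):
# generate a 256-bit pre-shared key and copy it to both machines
dd if=/dev/urandom bs=32 count=1 of=keyfile
# receiver: spiped decrypts whatever arrives on :8025 and forwards it to a local listener
spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:9000' -k keyfile
socat TCP-LISTEN:9000,reuseaddr - | my-command
# sender: spiped encrypts a local port and forwards it to the receiver
spiped -e -s '[127.0.0.1]:8025' -t 'receiver.example.com:8025' -k keyfile
my-command | socat - TCP:127.0.0.1:8025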
Similar with iroh.
You could also set up a wg server, have both clients connect to it and then pass data between the two IPs. There's still a central relay passing data around, NAT or no NAT.
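Roughly, with the plain wg/ip tools (keys, addresses and the server endpoint are placeholders, and firewalling is left out):
# on the relay server (public IP), forwarding between the two clients
ip link add wg0 type wireguard
wg set wg0 listen-port 51820 private-key server.key
wg set wg0 peer <pubkey-A> allowed-ips 10.0.0.2/32
wg set wg0 peer <pubkey-B> allowed-ips 10.0.0.3/32
ip addr add 10.0.0.1/24 dev wg0 && ip link set wg0 up
sysctl -w net.ipv4.ip_forward=1
# on client A (client B is the same with 10.0.0.3)
ip link add wg0 type wireguard
wg set wg0 private-key a.key peer <pubkey-server> endpoint server.example.com:51820 allowed-ips 10.0.0.0/24 persistent-keepalive 25
ip addr add 10.0.0.2/24 dev wg0 && ip link set wg0 up
# then pass data between the two wg IPs, e.g.
socat TCP-LISTEN:9000 - > out.bin    # on A
socat - TCP:10.0.0.2:9000 < in.bin   # on B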
After getting burnt on wireguard a few times now, I'm not keen on using it anymore.
I want less magic, not more impenetrable iptables rulesets (in linux at least).
Having run servers on OpenVPN, IPSec and Wireguard.. Wireguard is very mundane software.
I still get the chills at the deep and arcane configuration litanies you have to dictate over calls to get a tunnel configured. And sometimes, if you had to integrate different implementations of IPSec with each other, it just wouldn't work and eventually you'd figure out that one or two parameters on one side are just wrong.
And if you don't want to manage IPTables/nftables manually to firewall the traffic from the VPN (which is ugly, I agree), ufw or firewalld introduced forwarding rule management (route, and policies) recently.
Yes, the initial setup and troubleshooting of IPSec can be a nightmare. I've spent hours on bridges with people getting it up and running properly.
Wireguard is a damn simple breath of fresh air. There's so little to configure and go wrong. The mental model took a little bit of time to click for me (every endpoint is a peer, it's not client/server) but after that it was a breeze.
Wireguard is so much simpler than those other options. IPSec is a mess.
Interested to know how you've been burnt by wireguard; what did it not do that you were expecting? What failures have you experienced with it that were the fault of wireguard?
I've been using it (fairly simply, mind you) and it's been pretty solid for a number of years, and was an administrative relief in comparison to OpenVPN which I'd been using before wireguard existed. Single UDP port usage makes me query your comment about impenetrable IP table rulesets.
(OpenVPN was great for its time too, the sales reps at the company where I introduced it loved the ability to work from the road, way back in the early 2000's)
"Interested to know how you've been burnt by wireguard; what did it not do that you were expecting?"
Speaking just for myself, I expected it to be as easy to set up as Tailscale. Not to be set up in exactly the same manner as Tailscale, I understand they are not identical technologies, but I expected the difficulty to be within spitting distance of each other.
Instead I fussed with Wireguard for a few days without it ever working for even the simplest case and had Tailscale up and running in 5 minutes.
I think I recognize the pattern; it's one that has plagued Linux networking in general for decades. The internet is full of "this guy's configuration file that worked once", and then people banging on that without understanding, and the entire internet is just people banging on things they don't understand, 80% of which are for obsolete versions of obsolete features in obsolete kernels, until the search engines are so flooded with these things that if there is a perfect and beautiful guide to understanding exactly how this all works together and gives the necessary understanding to fix the problems yourself it's too buried to ever find. It also doesn't help that these networking technologies are some of the worst when it comes to error messages and diagnosis. Was I one character away from functionality, or was my entire approach fundamentally flawed and I was miles from it working? Who's to say, it all equally silently fails to work in the end.
Out of curiosity, what references were you looking at for the setup?
I mistyped that. It was tailscale not wireguard.
Tailscale changes your dns lookups, adds a bunch of iptables, and then unfortunately broke features without adding them to the changelog (because security I guess).
While wireguard has more of a maintenance overhead tracking public and private keys and ip addresses, it does less magic -- and I really just want things to work these days.
Never knew about ssh-j.com. Neat.
The approach you describe requires the host to have an open ssh port you can access. QUIC + NAT hole punching works around this.
you need an ssh server and an open port, different protocol etc
Every time someone calls a product “dumb,” I get a little excited, because it usually means it’s actually smart. The internet is drowning in “smart” stuff that mostly just spies on you and tries to sell you socks. Sometimes, I just want a pipe that does what it says on the tin; move my bits, shut up, and don’t ask for my mother’s maiden name.
Dumb is now 'we don't steal your data'
> and tries to sell you socks
I've been writing raw POSIX network code today. A lot of variables shorten "socket" to "sock". And my brain was like.. um, bad news! This is trying to sell us on their special sock(et)s!
I thought it was quite a fun pun for the same reason.
But what about the enterprise ready AI features so that they can train on your data?
Somewhat relevant, I have a list of (mostly browser based + few no-setup cli) tools [1] to send files from A to B. I keep sharing this list here to fish for more tools whenever something like this comes up.
[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
I love LocalSend for quick transfers between your own devices, just werks on every OS.
One limitation of iOS is the inability to use Bluetooth to transfer an image/video file to a Bluetooth receiver such as a Windows PC. The Apple documentation requires a wired connection. https://support.apple.com/en-ca/120267
If LocalSend is running on iOS and Windows does LocalSend have the ability to send photos?
It should work, though I haven't actually tried it. That's not a limitation of iOS, just Apple's own syncing app/protocol. LocalSend is basically an http client/server with network device discovery, as far as I know.
> If LocalSend is running on iOS and Windows does LocalSend have the ability to send photos?
Yes, I use it all the time.
Both devices need to be on the same network (LAN / WiFi), however. LocalSend does not use Bluetooth.
There's a project that caught my attention recently. It claims to support multiple different protocols, works in various web browsers (even IE6), and is extremely easy to set up (a single Python file). I haven't given it a try, just wanted to share.
same team behind dumbpipe makes sendme, which is much closer to this use case! https://github.com/n0-computer/sendme
I wonder why it's not standard that you can simply connect two PC's to each other with a USB cable and have them communicate/transfer files. With same protocol in all OSes, of course. Seems like it should have been one of the first features USB could have had since the beginning, imho
I know there's something about USB A to USB A cables not existing in theory, but this would have been a good reason to have it exist, and USB C of course can do this
Also, Android to PC can sort of do it, and is arguably two computers in some form (but this was easier when Android still acted like a mass storage device). But e.g. two laptops can't do it with each other.
You actually can connect two machines via USB-C (USB4 / Thunderbolt) and you get a network connection.
You only get Link-Local addresses by default, which I recall as somewhat annoying if you want to use SSH or whatever, but if you have something that does network discovery it should probably work pretty seamlessly.
See https://christian.kellner.me/2018/05/24/thunderbolt-networki... or https://superuser.com/a/1784608
> You only get Link-Local addresses by default
The same thing happens with two machines connected via an Ethernet cable, which appears to be what this USB4 network feature does - an Ethernet NIC to software, but with different lower layer protocols.
Crossover cables, get'cher crossover cables here!
AIUI, most NICs these days do what is called "auto-crossover"; i.e., they'll detect the situation and just do the "crossover" in the NIC itself. A normal cable works.
Yes, the name is Auto MDI-X and is standard since on-board 1Gbps Ethernet NICs became the norm.
https://en.wikipedia.org/wiki/Medium-dependent_interface#Aut...
ssh is fine:
ssh fe80::2%eth0
where fe80::2 is the peer's address, and eth0 is the local name of the interface they're on.
Unfortunately browsers have decided that link-local is pointless and refuse to support it, so HTTP is much more difficult.
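If you don't already know the peer's link-local address, pinging the all-nodes multicast group usually flushes it out (interface name and addresses here are just examples):
ping6 ff02::1%eth0            # neighbours on the link answer
ip -6 neigh show dev eth0     # list who responded
ssh fe80::2%eth0
scp big.iso "[fe80::2%eth0]:/tmp/"    # brackets needed; quoting may vary by shell
curl "http://[fe80::2%25eth0]:8000/"  # curl accepts the zone id (percent-encoded); browsers won't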
Non-USB-shaped older Thunderbolt, down to version 1, can do this too, iirc. But you do need the expensive and somewhat rare cable.
The incredible technology you're describing was possible on the Nintendo DS without wires and no need for a LAN either. It's a problem that's been solved in hundreds of different ways over the last 40 years but certain people don't want that problem to ever be solved without cloud services involved.
This dumb pipe thing is certainly interesting but it will run into the same problem as the myriad other solutions that already exist. If you're trying to give a 50MB file to a Windows user they have no way to receive it via any method a Linux user would have to send it unless the Windows user has gone out of their way to install something most people have never heard of.
> It's a problem that's been solved in hundreds of different ways over the last 40 years
If we put the requirements of,
1. E2EE
2. Does not rely on Google. (Or ideally, any other for profit corporation.)
That eliminates like 90% of the recent trend of WebRTC P2P file transfer things that have graced HN over the last decade, as all WebRTC code seems to just copy Google's STUN/TURN servers between each other.
But as you say,
> but certain people don't want that problem to ever be solved without cloud services involved.
ISPs seem to be in that set. IPv6 would obsolete NAT, but my ISP was kind enough to ship an IPv6 firewall that by default drops incoming packets. It has four modes: drop everything, drop all inbound, a weird intermediate mode that is useless¹, and allow everything.
(¹this is Verizon fios; they claim, "This feature enables "outside-to-inside" access for IPv6 services so that an "outside" Internet service (gaming, video, etc.) can access a specific "inside" home client device & port in your local area network."; but the feature, AFAICT, requires the external peer's address. I.e., I need to know what my roaming IP will be before I leave the house, somehow, and that's obviously impossible. It seems utterly clearly slapped on to say "it comes with a firewall" but was never used by anyone at Verizon in the real world prior to shipping…)
starlink doesn't even give you publicly routable ipv6 unless you bypass the starlink router.
My starlink is such that i cannot install/set up things like pfsense/opnsense because the connection drops sometimes, and when either of those installers fail, they fail all the way back to "format the drive y/n?" Also, things like ipcop and monowall et al don't seem to support ipv6.
I looked into managing ipv6 from an "i am making my own router" angle and no OS makes this simple. i tried with debian, and could not get it to route any packets. I literally wrote the guide for using a VM for ipcop and one of the "wall" distros; but something about ipv6 just evades me.
> starlink doesn't even give you publicly routable ipv6 unless you bypass the starlink router.
If you've not got an Internet[-routable] address, are you truly connected to the Internet?
> I looked in to managing ipv6 from a "i am making my own router" and no OS makes this simple. i tried with debian, and could not get it to route any packets. I literally wrote the guide for using a VM for ipcop and one of the "wall" distros; but something about ipv6 just evades me.
TBH, I would think that this is just enabling v6 forwarding. That wouldn't do RA or DHCP, I don't think, but I don't think you'd want that, either. (That would be the responsibility of the upstream network.)
You would want that. The upstream network can't do it for you, because RAs can't be routed. Same deal for DHCPv6 (although personally I'd say you can probably skip that and just use SLAAC).
in order to have public ipv6 on starlink you need to manage the /56 they delegate to you into however many /64s that is (at least 8); i tested it with a store bought router, everything worked if you can do PD with DHCP[v6] or whatever. I returned the router because it was $200 and i will eventually figure it out on a VM.
It's pretty simple with systemd-networkd:
# On the upstream network.
[Network]
DHCP=yes
[DHCPv6]
PrefixDelegationHint=::/56
# On each downstream network.
[Network]
IPv6SendRA=yes
DHCPPrefixDelegation=yes
If you don't want systemd-networkd, look at https://wiki.debian.org/IPv6PrefixDelegation#Using_ifupdown_.... Firewalling is the same as v4, just without the NAT.
One frustrating part is that as far as I can tell nothing supports easy downstream DHCPv6-PD delegation, so machines on the downstream network that want their own prefix won't be able to get one automatically. OpenWRT's network config daemon supports it, but nothing on regular Linux does.
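For the firewalling bit, a minimal nftables sketch of a stateful v6 forward policy (the lan0/wan0 interface names are assumptions):
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy drop; }'
nft add rule inet filter forward ct state established,related accept
nft add rule inet filter forward iifname "lan0" oifname "wan0" accept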
> however many /64s that is (at least 8);
256!
Pairdrop.net - no need to install anything, transfers go over the local network if both devices are in a LAN.
I mean, windows users install things they’ve never heard of all the time.
If this was a real thing you needed to do, and it is too much work to get them to install WSL, you could probably just send them the link to install Git and use git bash to run that curl install sh script for dumbpipe.
And if this seemed like a very useful thing, it couldn’t be too hard to package this all up into a little utility that gets windows to do it.
But alas, it remains “easier” to do this with email or a cloud service or a usb stick/sd card.
> It's a problem that's been solved in hundreds of different ways over the last 40 years
I guess now you can find the solution that you need by telling the requirements to LLMs who have now indexed a lot of the tradeoffs
USB is asymmetric - there's a host and a device, and the latter acts as a polled slave.
The use-case of a wired connection between two PCs was already solved years before USB --- with Ethernet.
there are USB 2.0 (and probably 1.x) devices with usb-A on both sides and a small box in the middle that acts as a network crossover between two machines, i've seen them in stores. I've never used one because i know how to set CIDR. And, as others have mentioned, this does just work with usb-c.
Like so many possible networking/connection nice things that we can't have, you really can directly blame this one on "the companies."
Brought to you by the same people that made "peer-to-peer" a dirty word.
"I wonder why it's not standard that you can simply connect two PC's to each other with a USB cable and them communicate/transfer files."
After TCP/IP became standard on personal computers, I used an Ethernet crossover cable to transfer large files between computers. I always have some non-networked computers. USB sticks were not yet available.
Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Much has changed over the years. Expect replies about those changes. There are many, many different ways to transfer files today. Expect comments advocating those other methods. But the crossover cable method still works. With a USB-to-Ethernet adapter it can work even on computers with no Ethernet port. No special software is needed. No router is needed. No internet is needed. Certainly no third party is needed. Just TCP/IP which is still a standard.
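For anyone who hasn't done it, a rough sketch on Linux (interface, addresses and file names are placeholders; with auto MDI-X you don't even need an actual crossover cable):
# machine A
ip addr add 192.168.50.1/24 dev eth0 && ip link set eth0 up
# machine B
ip addr add 192.168.50.2/24 dev eth0 && ip link set eth0 up
# then any TCP/IP tool works across the cable, e.g. from B:
scp big.iso 192.168.50.1:/tmp/            # if A runs sshd
socat - TCP:192.168.50.1:9000 < big.iso   # or pair with: socat TCP-LISTEN:9000 - > big.iso on A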
> Today the Ethernet port is removed from many personal computers
Pretty sure one can set up an ad hoc wifi network for this.
Not on Windows 11 you can't. They removed that for...reasons. They also removed the lovely hosted network that was added with 7 (Vista?) so now you can't network two modern Windows devices without something else (physical cable, or a non-Windows or older Windows device for hosting a network). Stuck with a low speed Wi-Fi router and USB 2 cables? It's gonna take you hours to make that one-time 200gb transfer, unless you wanna drag it down the stairs (the only USB 3 cables I own are mini USB 3 cables for use with an older external hard drive that I no longer own, all my USBC cables are USB 2/PD only...I think...).
One … can. I have a script for this myself, but I only set that up after wanting to do ad hoc and then realizing that it was basically impossible to do from scratch. Ad hoc requires an Internet connection to download the knowledge necessary to do ad hoc, and that utterly defeats the point of it all. (Except in how I've now cached that into a script.)
Ad hoc requires the machines be in "WiFi shouting range".
I was about to talk about how online help files are forgotten these days, and should guide you to the right information to set up an ad-hoc network, but I was disappointed three times over by macOS.
macOS does not have any offline documentation like pretty much every OS used to. When I turn off my WiFi and then open "Mac User Guide" or "Tips for your Mac", they both tell me they require an internet connection.
When I re-enable my internet connection, neither of those apps have information about how to set up an ad-hoc wifi network.
When I looked up how to create an ad-hoc network in other sources, I discovered that the ability to create an ad-hoc network was apparently removed from the GUI in macOS 11, and now requires CLI commands.
I hate how modern tech companies assume that everybody always has access to a high speed internet connection.
> I hate how modern tech companies assume that everybody always has access to a high speed internet connection.
I suspect it's deliberate, especially when said company also sells cloud services.
Oh so you bought two computers at a store with the operating system preinstalled and have never connected them to the Internet? And you have no Internet access whatsoever to look things up for your two 100% air gapped computers?
I’m not buying it.
That's sort of a disingenuous phrasing, but yes. I'm not thinking of them as "air gapped", since I'm intentionally attempting to form an ad hoc WiFi network between them, but yes, until two laptops are connected over a network, yeah, they're effectively "air gapped" I suppose.
They have normal, consumer OSes on them. Whatever one might reasonably already have preinstalled.
I'm sitting at an macOS machine presently. If I poke around the Wi-Fi menu, and the Wi-Fi settings … IDK, I come up empty handed.
So let's cheat, and Google it. But the entire point of my post above is that needing to Google it defeats the point; if I have an Internet connection (which would be required to Google something) — I can just network the various machines using that Internet connection. In every situation I've wanted to form an ad hoc network, it is because I do not have any access to the Internet, period, but I still have the need to network two machines together.
Anyways, Gemini's answer:
> To set up an ad-hoc Wi-Fi network on macOS, you can use the "Create Network" option in the Wi-Fi menu.
Apparent hallucination, since there is no such menu item.
The first result says the same thing:
> 1. Click the wifi icon on the menu bar. 2. Click “Create network. . .”
(… I suppose I see where the training data came from).
The next result is a reddit thread; the thread is specifically about ad hoc WiFi. The only answer is a link to a macOS support article; that article tells us to go to General → Sharing, and use "Internet Sharing". But AFAICT, that's for sharing an existing WiFi connection over a secondary medium: i.e., if you have WiFi, you could share that connection over a TB cable, or some other wired medium. And "To Devices Using" conspicuously lacks "also over WiFi", or similar. I.e., this also isn't what we're looking for.
The rest of the results are mostly all similarly confused, and I've given up.
So even if I had Internet, … I still can't do it. So if I'm actually in a situation where I need an ad hoc, it definitely isn't happening.
> if I have an Internet connection (which would be required to Google something) — I can just network the various machines using that Internet connection
Wow, tell me you don’t know how computer networks work without telling me you don’t know how computer networks work.
I think we are done here.
What?
I think there must be some misunderstanding? I think deathanatos just wants an easy way to send files between computers when the internet is down, which seems decently reasonable.
This is super easy in Linux with NetworkManager. I assume other OS's have simple hotspot functionality?
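For reference, with NetworkManager it's roughly one command per side (SSID and password are placeholders):
# machine A: start a hotspot
nmcli device wifi hotspot ifname wlan0 ssid adhoc-xfer password "correct horse"
# machine B: join it
nmcli device wifi connect adhoc-xfer password "correct horse"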
I never got around to installing NM in Linux. wpa_supplicant on its own is just … mostly good enough.
Perhaps that's mea culpa, and I suppose perhaps I should try NM again, but I also sort of thought this wouldn't be rocket science, until I tried to do it and failed.
> Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Oh come on, this isn't a conspiracy. For the last decade, every single laptop computer I've used has been thinner than an ethernet port, and every desktop has shipped with an ethernet port. I think the last few generations of MacBook Pros (which were famously thicker than prior generations) are roughly as thick as an ethernet port, but I'm not sure it'd practically fit.
And I know hacker news hates thin laptops, but most people prefer thin laptops over laptops with ethernet. My MacBook Air is thin and powerful and portable and can be charged with a USB-C phone charger. It's totally worth it for 99% of people to not have an ethernet port.
> thinner than an ethernet port
The XJACK and similar designs have been around long enough they can vote
You used to be able to connect two PC’s together via the parallel port. I had to do this once to re-install Windows 95 on a laptop with a hard drive and floppy. It was painfully slow but it worked.
https://en.wikipedia.org/wiki/IEEE_1284#Characteristics
Up to 2MB/s effective throughput, better than 10M Ethernet. Likely it was slower for you due to other limitations.
I believe this was pioneered by Laplink[0].
Yup. Used to use this a lot in the DOS era. Copying files between LPT or COM ports. Slow, but worked without too much hassle.
or [x|z]modem ?
> You used to be able to connect two PC’s together via the parallel port.
This could be done on Amiga too, using parnet https://crossconnect.tripod.com/PARNET.HTML
I recall it being easier to set up than a dialup modem (since the latter also required installing a TCP/IP stack)
On Linux you can do it by creating an MTP endpoint, like mobile devices do https://github.com/viveris/uMTP-Responder
It looks like MS also had one, but only on Windows CE for some reason https://www.microsoft.com/en-us/download/details.aspx?id=933...
Or an rndis gadget
You can plug an ethernet cable in between machines and send files over it! So that period where this would be useful already had a pretty good solution (I vividly remember doing this like 3 times in the same day with some family members for some reason (probably nobody having a USB drive at the moment!))
FireWire did this, IIRC. When buying a new Mac you would connect them via a single cable to do the data transfer.
Macs still have target disk mode but it requires rebooting. Highly recommend using thunderbolt to transfer over to a new computer!
IIRC Apple computers can be put into Target Disk Mode, which lets a host computer rifle through its contents as if it is a dumb disk drive
This requires shutting down one computer (the mac) first, though.
I realize you are asking for cross-OS, but Mac OS X was doing this in 2002 (and probably earlier) for PowerBook models with an ethernet cable between them. As I recall, iBooks didn't do this even if they had the port, but PowerBooks would do the auto-crossover, then Finder/AFP would support the machines showing up for each other.
I actually have a USB-A to USB-A cable. It came with proprietary Windows software on an 80mm CD-ROM. It wasn't long enough to connect two desktops in the same room if not on the same table, and I just never tried with a laptop because all my laptops have run Debian or some variant thereof since 2005 or so.
The USB 3.0 spec does actually support A to A cables, but I'm not sure if any software makes use of it.
You mean a cross ethernet cable?
Or using Bluetooth? Or using local WiFi (direct or not).
> You mean a cross ethernet cable?
If both machines have an Ethernet port.
> Or using Bluetooth?
Half the time I need a dumb pipe, it's from personal to work. Regrettably, work forces me to use macOS, and macOS's bluetooth implementation is just an utter tire fire, and doesn't work 90% of the time. I usually fall back to networks, for that reason.
Of course, MBPs also have the "no port" problem above.
> Or using local WiFi (direct or not)
If I'm home, yeah. But TFA is advertising the ability to hole-punch, and if I'm traveling, that'd be an advantage.
ethernet works out of the box, i used local lans a long time before i knew how to program
usb probably works too if you google a bit
But then nobody could analyze your files... :/
> In the iroh world, you dial another node by its NodeId, a 32-byte ed25519 public key. Unlike IP addresses, this ID is globally unique, and instead of being assigned,
ok but my network stack doesn't speak nodeID, it speaks tcp/ip -- so something has to resolve your public keys to a host and port that I can actually connect to.
this is roughly the same use case that DNS solves, except that domain names are generally human-compatible, and DNS servers are maintained by an enormous number of globally-distributed network engineers
it seems like this system rolls its own public key string to actual IP address and port mapping/discovery system, and offers a default implementation based on dns which the authors own and operate, which is fine. but the authors kind of hand-wave that part of the system away, saying hey you don't need to use this infra, you can use your own, or do whatever you want!
but like, for systems like this, discovery is basically the entire ball game and the only difficult problem that needs to be solved! if you ignore the details of node discovery and name mapping/resolution like this, then of course you can build any kind p2p network with content-addressable identifiers or whatever. it's so easy a cave man can do it, just look at ipfs
We do use DNS, but we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised.
And, as somebody else remarked, the ticket contains the direct IP addresses for the case where the two nodes are either in the same private subnet or publicly reachable. It also contains the relay URL of the listener, so as long as the listener remains in the same geographic region, dumbpipe won't have to use node discovery at all even if the listener ip changes or is behind a NAT.
> we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised
if users access that bittorrent mainline DHT thru a third party server then it's obviously not decentralized, right? that server is the central point to which clients delegate trust
In practice, the "ticket" provided by dumbpipe contains your machine's IP and port information. So I believe two machines could connect without any need for discovery infra, in situations that use tickets. (And have UPnP enabled or something.)
OK so given
$ ./dumbpipe listen
...
To connect use: ./dumbpipe connect nodeecsxraxj...
that `nodeecsxraxj...` is a serialized form of some data type that includes the IP address(es) that the client needs to connect to?
forgive me for what is maybe a dumb question, but if this is the case, then what is the value proposition here? is it just the smushing together of some IPs with a public key in a single identifier?
The value proposition of the ticket is that it is just a single string that is easy to copy and paste into chats and the like, and that it has a stable text encoding which we aim to stay compatible with for some time.
We have a tool https://ticket.iroh.computer/ that allows you to see exactly what's in a ticket.
a URL is also a single string that's easy to copy and paste, the question I have is how these strings get resolved to something that I can connect to
if you need to go thru a relay to do resolution, and relays are specified in terms of DNS names, then that's not much different than just a plain URL
if the string embeds direct IPs then that's great, but IPs are ephemeral, so the string isn't gonna be stable (for users) over time, and therefore isn't really useful as an identifier for end users
if the string represents some value that resolves to different IPs over time (like a DNS entry) but can be resolved via different channels (like thru a relay, or via a blockchain, or over mdns, or whatever) then that string only has meaning in the context of how (and when) it was resolved -- if you share "abcd" with alice and bob, but alice resolves it according to one relay system, and bob resolves it according to mdns, they will get totally different results. so then what purpose does that string serve?
The value prop is that dumbpipe handles encryption, reconnection, UPnP, hole punching, relays, etc. It's not something I could easily replicate with netcat, for example.
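i.e. the usage stays netcat-shaped while all of that happens underneath; a hedged example based on the listen/connect commands quoted elsewhere in the thread (the ticket string comes from the listener's output):
# machine A
dumbpipe listen > received.bin
# machine B, pasting the ticket the listener printed
dumbpipe connect nodeecsxraxj... < file-to-send.bin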
ngrok and tailscale and lots of other services offer all of these capabilities, the only unique thing of this one seems to be the opaque string identifiers + some notion of "decentralization" which is what I'm trying to understand, particularly in the realm of how discovery works
I wonder how much reimplementation there is between this and Tailscale, as it seems like there are many needs in common. One would think that there are already low level libraries out there to handle going through NATs, etc. (but maybe this is just the first of said libraries!)
Who cares at this point, Tailscale itself is the 600th reimplementation of the same idea, with predecessors like nebula and tinc. They came at the right time, with WireGuard being on the rise, and poured millions into advertisements that their community "competitors" didn't have since most of them aren't riding on VC money.
I've met a lot of people who think Tailscale invented what it does.
Prior to Tailscale there were companies -- ZeroTier and before it Hamachi -- and as you say many FOSS projects and academic efforts. Overlay networks aren't new. VPNs aren't new. Automated P2P with relay fallback isn't new. Cryptographic addressing isn't new. They just put a good UX in front of it, somewhat easier to onboard than their competitors, and as you say had a really big marketing budget due to raising a lot when money was cheap.
Very few things are totally new. In the past ten years LLMs are the only actually new thing I've seen.
Shill disclosure: I'm the founder of ZeroTier, and we've pivoted a bit more into the industrial space, but we still exist as a free thing you can use to build overlays. Still growing too. Don't have any ill will toward Tailscale. As I said nobody "owns" P2P and they're doing something a bit different from us in terms of UX and target market.
These "dumb pipe" tools -- CLI tooling for P2P pipes -- are cool and useful and IMHO aren't exactly the same thing as ZT or TS etc. They're for a different set of use cases.
The worst thing about the Internet is that it evolved into a client-server architecture. I remain very cautiously optimistic that we might fix this eventually, or at least enable the other paradigm to a much greater extent.
I know it wasn't a "new" idea, but still, ZT was a paradigm shift for me. I was suddenly on the same LAN with people I cared about. Thank you for making it happen.
> put a good UX in front of it
It's good as long as everything works out of the box, but it's a nightmare when something doesn't work. Or at least that has been my experience. I'm used to always troubleshoot first when I have any issue, but with Tailscale I decided I'm done trying to fight it, next time something doesn't work I'll just open a ticket and make it the ops team problem.
This is true for all systems that hide a lot of complexity. Apple is great until something doesn't work and you get things like "Error: try again later." A car is great until it doesn't start, and there are numerous reasons that can happen.
I remember running Hamachi and NoIP DUC's (Dynamic Update Client) as a kid in late 2000's to expose private server addresses for games or for multiplayer through direct network addresses
NoIP was also the recommended "easy" option for configuring RAT (Trojan) host addresses at the time IIRC.
Hamachi was BIG in the gaming scene. I used to host a Tibia server and use it to make the server accessible to friends.
As one of the iroh developers I must say thank you for creating ZeroTier! It absolutely was part of the inspiration and its seamless functioning continues to amaze me daily. It's something that continues to drive me to strive for an equally seamless experience in iroh.
I love the fact we can make different tools learning from each other and approaching making p2p usable in different ways.
As others have said Hamachi was very popular in some gaming communities. I don't know quite how it fits technologically, but a similar user experience seems to come from playit.gg[1].
My friends and I used Hamachi in the early 2000s to play StarCraft and other games over the internet without involving online services. Worked great. I’ve got a soft spot for it.
As hyped as tailscale is, at least there is an option to fully self-host the coordination server. Do you have something like that?
ZeroTier controllers can be self-hosted.
It doesn't look fully independent from ZT. It's maintained by you guys. Headscale is fully independent and has much clearer, easy-to-follow docs
TailScale sells certificate escrow, painless SSO, high-quality integrations/co-sell with e.g. Mullvad, full-take netlogging, and "Enterprise Look and Feel" wrapped around the real technology. You can run WireGuard yourself, and sometimes I do, but certificate management is tricky to get right, the rest is a pain in the ass, and TailScale is cheap. The hackers behind it (bfitz et al.) are world-class, and you can get it past most "Enterprise" gatekeeping.
It doesn't solve problems on my personal infrastructure that I couldn't solve myself, but it solves my work problem of getting real networking accepted by a diverse audience with competing priorities. And it's like 20 bucks a seat with all the trimmings. Idk, maybe it's 50, I don't really check because it's the cheapest thing on my list of cloud stuff by an order of magnitude or so.
It's getting more enterprise and less hackerish with time, big surprise, and I'm glad there's younger stuff in the pipe like TFA to keep it honest, but of all the necessary evils in The Cloud? I feel rather fondly towards tailscale rather than with cold rage like most everything else on the Mercury card.
I've managed a Wireguard-based VPN before Tailscale. It's pretty straightforward[0].
Tailscale makes it even more convenient and adds some goodies on top. I'm a happy (free tier) user.
[0] I also managed an OpenVPN setup with a few hundred nodes a few decades back. Boy do we have it easy now...
Iroh is much better suited for the application layer. You can multiplex multiple QUIC streams over the same connection, each for a specific purpose. All you need is access to QUIC, no virtual network interface.
It’s a bit like gRPC except you control each byte stream and can use one for, say, a voice call while you use another for file transfer and yet another for simple RPC. It’s probably most similar to WebRTC but you have more options than SCTP and RTMP(?).
This is made using iroh, which aims to be a low level framework for distributed software. Involves networking but also various data structures that enable replication and consistency between networked nodes.
Does it include reconnection logic? I presume that's not considered "low level", but it does always annoyingly have to be reimplemented every time you deal with long-lived socket connections in production.
yes, to an extent. It will time out if the connection completely dies for more than the timeout interval, but all connections are designed to survive network changes like a new IP address or network interface (eg: switching from WiFi to ethernet, or cellular)
Iroh is one of these low level libraries. It is basically p2p QUIC, where p2p means 1. addressing by node id and 2. hole punching.
Dumbpipe is meant to be a useful standalone tool, but also a very simple showcase for what you can do with iroh.
Connecting phones on mobile/CGNAT with Tailscale is really one of the few software "Aha" moments I've had.
Isn’t tailscale a wrapper around WireGuard? With some other hole-punch sprinkles?
Well, WireGuard and WebRTC, but yes.
The real feature of Tailscale is being able to connect to devices without worrying about where they are.
You might be confusing it with netbird, which is the 601st implementation of a mesh network that does use both WebRTC and WireGuard.
There's no WebRTC in Tailscale.
Isn't a DERP server just WebRTC with minor changes?
You don't need the whole of WebRTC for NAT traversal, TURN/STUN will do the job.
Edit: apparently it uses STUN/TURN but not WebRTC
Nat punch is a big part of it, but so is key management/sync, and configuration management.
...and DNS, and host provisioning, and SSO, and RBAC, and other stuff you need to sell to enterprises.
tailscale is a wrapper around wireguard in the same way that dropbox is a wrapper around rsync
There's overlap but i can see complementary uses as well. It uses some of the same STUN family of techniques. I have no plans to stop using TailScale (or socat) but i think i use this every day now too.
iroh is meant to be this library, but there is also libp2p, which existed before iroh.
Part of the problem with libp2p is that the canonical implementations are in Go which isn’t really well-suited to use from C++, JS, or Rust. The diversity of implementations in other languages makes for varying levels of quality and features. They really should have just picked one implementation that would be well-suited to use via C FFI and provided ergonomic wrappers for it.
After writing a response about using this for games below, it occurred to me that most tunneling solutions have one or more fatal flaws that prevent them from being "the one true" tunnel. There are enough footguns that maybe we need a checklist similar to the "Why your anti-spam idea won’t work" checklist:
https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...
I'll start:
Your solution..
( ) Can't punch through NAT
( ) Isn't fully cross-platform
( ) Must be installed at the OS level and can't be used standalone by an executable
( ) Only provides reliable or best-effort streams but not both
( ) Can't handle when the host or peer IP address changes
( ) Doesn't checksum data
( ) Doesn't automatically use encryption or default to using it
( ) Doesn't allow multiple connections to the same peer for channels or load balancing
( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes
( ) Uses a restrictive license like GPL instead of MIT
Please add more and/or list solutions that pass the whole checklist!
Nice list.
I think iroh checks all the boxes but one.
( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes
So you want a way to send unreliable datagrams larger than one MTU. We don't have that, since we only support datagrams via https://datatracker.ietf.org/doc/html/rfc9221 .
You could just use streams - they are extremely lightweight. But those would then be reliable datagrams, which comes with some overhead you might not want.
So how hard would it be to implement window logic on top of RFC9221 datagrams?
I'm not sure I fully understand this window logic question. QUIC does MTU discovery, so if the link supports bigger datagrams the MTU will go up. Unreliable datagrams using RFC9221 can be sent up to the MTU size minus the QUIC packet overhead. So if your link supports >1500 bytes then you should be able to send datagrams >1500 bytes using iroh.
I think the OP wants a built in solution to send unreliable datagrams larger than the MTU.
Fragmenting datagrams (or IP packets) is generally not a good idea. All protocol designs have been moving away from this the past few decades. If you want unreliable messages of larger than the MTU maybe taking some inspiration from Media-over-QUIC is a good idea. They use one uni-directional QUIC stream per message and include some metadata at the start of each stream to explain how old it is. If a stream takes too long to read to end-of-stream and you already have a newer message in a new uni-directional stream you can cancel the previous streams (using something like SendStream::reset or RecvStream::stop in Quinn API terms, depending on which side detects the message is no longer needed earlier). Doing this will stop QUIC from retransmitting the lost data from the message that's being slow to receive.
Right, I should have been more clear about that. Window logic was perhaps the wrong term, since I don't care about resends.
The use case I have in mind is for realtime data synchronization. Say we want to share a state larger than 1500 bytes, then we have to come up with a clever scheme to compress the state or do partial state transfer, which could require knowledge of atomic updates or even database concepts like ACID, which feels over-engineered.
I'd prefer it if the protocol batched datagrams for me. For example, if we send a state of 3000 bytes, that's 2 datagrams at an MTU of 1500. Maybe 1 of those 2 fails so the message gets dropped. When we send a state again, for example in a game that sends updates 10 times per second, maybe the next 2 datagrams make it. So we get the most recent state in 3 datagrams instead of 4, and that's fine.
I'm thinking that a large unreliable message protocol should add a monotonically increasing message number and index id to each datagram. So sending 3000 bytes twice might look like [0][0],[0][1] and [1][0],[1][1]. For each complete message, the receiver could inspect the message number metadata and ignore any previous ones, even if they happen to arrive later.
Looks like UDP datagram loss on the internet is generally less than 1%:
https://stackoverflow.com/questions/15060180/what-are-the-ch...
So I think this scheme would generally "just work" and hiccup every 5 seconds or so when sending 10 messages per second at 2 datagrams each and a 99% success rate, and the outage would only last 100 ms.
We might need more checklist items:
( ) Doesn't provide a way to get the last known Maximum Transmission Unit (MTU)
And optionally:
( ) Doesn't provide a way to get large unreliable message number metadata
Also there's no solution to punch through NAT.
Iroh will do hole punching through NATs. It will even work in many cases when there are NATs on both sides.
There are some limitations regarding some double NATs or very strictly configured corporate firewalls. This is why there is always the relay path as a fallback.
If you have a specific situation in mind and want to know if hole punching works, we got a tool iroh-doctor to measure connection speed and connection status (relay, direct, mixed):
https://crates.io/crates/iroh-doctor , can be installed using cargo install iroh-doctor if you have rust installed.
There might be some confusion here, holepunching is a core functionality of iroh. There are still some firewall configurations that iroh can not yet holepunch and that can still be improved, but in general the holepunching works rather well.
iroh is fantastic tech.
I attended Rüdiger's (N0) workshop 2 weeks ago at the web3 summit in Berlin and was left super inspired. The code for building something like this is available here https://github.com/rklaehn/iroh-workshop-web3summit2025 and I highly recommend checking out the slides too :)
Thank you for the praise! It is nice to hear that people enjoy these workshops.
I would love to see what people would build if they had a little bit more time with help from the n0 team. A one hour or even three hour workshop is too short.
At pico.sh we built something similar but using SSH: https://pipe.pico.sh
In a direct benchmark against dumbpipe, what do you think the results would be like?
Well pipe.pico.sh always uses a proxy server so throughput and latency are worse, but you have your own namespace for the pipes and thus don't have to synchronize random connection strings
https://github.com/anderspitman/awesome-tunneling - for anyone interested in the landscape of tunneling tools like this.
Does anyone know if this tech (or Iroh) is suitable for real-time networking for games? Basically, once connection is established, what's the overhead on top of UDP in terms of latency and bandwidth?
Edit: after digging a little, Iroh uses QUIC which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP which is what many games need.
Now what I'd love to figure out is if there's a way to use their relay hopping and connection management but send/receive data through a dumb UDP pipe.
> QUIC which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP which is what many games need.
This isn't right, as a sibling comment mentions. QUIC is a UDP-based protocol that handles stream multiplexing and encryption, but you can send individual, unordered, unreliable datagrams over the QUIC connection, which effectively boils down to UDP with a bit of overhead for the QUIC header. The relevant method in Iroh is send_datagram: https://docs.rs/iroh-net/latest/iroh_net/endpoint/struct.Con...
It would be nice if dumbpipe revealed the local and remote IP and UDP port numbers via something like STDERR or a signal so that apps could send UDP datagrams on them with ordinary socket calls. I'm guessing that QUIC uses a unique header in its first few bytes, so the app could choose something different and not interfere with the reliable stream.
A better solution would be to expose the iroh send_datagram and read_datagram calls somehow. Maybe if dumbpipe accepted a datagram flag like -d, then a second connection to a peer could be opened. It would recognize that the peer has already been found and maybe reuse the iroh instance. Then the app could send over either stream when it needs to be reliable or best effort.
This missing datagram feature was the first thing I thought of too when I read the post, so it's disappointing that it doesn't discuss it. Most proof-of-concept tools like this are MVPs, so they don't attempt to be feature-complete, which forces the user to either learn the entirety of the library just to use it, or fork it and build their own.
IMHO that's really disappointing and defeats the purpose of most software today, since developers are programmed to think that the "do one thing and do it well" unix philosophy is the only philosophy. It's a pet peeve of mine because nearly the entirety of the labor I'm forced to perform is about working around these artificial and unintentional limitations.
Ok I just looked at https://www.dumbpipe.dev/install.sh
if [ "$OS" = "Windows_NT" ]; then
echo "Error: this installer only works on linux & macOS." 1>&2
exit 1
else
So it appears to be linux and macOS only, which is of little use for games. I'm shocked, just shocked that I'll have to write my own.
> It would be nice if dumbpipe revealed the local and remote IP and UDP port numbers via something like STDERR or a signal so that apps could send UDP datagrams on them with ordinary socket calls.
I believe this would be even more unreliable than UDP, since Iroh is also capable of using a relay server for when hole punching can't be performed, and Iroh also handles IP migration.
> it appears to be linux and macOS only
Iroh should work on Windows, IIUC, just the installer and possibly prebuilt binaries aren't provided. But dumbpipe isn't designed for UDP anyways, it's closer to a competitor for socat/nc.
Yep! It's totally usable for games, and used in a few! One of my favs is the godot engine plugin: https://github.com/tipragot/godot-iroh
QUIC can do both reliable & unreliable streams, as can iroh
This reminds me a lot of holepunch.to (previously hypercore-protocol)
What I wonder is this, is there a clever and simple way to share the secret phrase between two devices? The example is pretty long to manually enter "nodeecsxraxjtqtneathgplh6d5nb2rsnxpfulmkec2rvhwv3hh6m4rdgaibamaeqwjaegplgayaycueiom6wmbqcjqaibavg5hiaaaaaaaaaaabaau7wmbq"
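One low-tech option, assuming you have qrencode installed: render the ticket as a QR code in the terminal and scan it with the other device (the string here is just the example ticket from above):
qrencode -t ansiutf8 "nodeecsxraxjtqtneathgplh6d5nb2rsnxpfulmkec2rvhwv3hh6m4rdgaibamaeqwjaegplgayaycueiom6wmbqcjqaibavg5hiaaaaaaaaaaabaau7wmbq"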
I've always found this path to be more compelling:
https://github.com/samyk/pwnat
It has more edges and doesn't handle all cases, but it also avoids the need for any kind of intermediary.
Older solution that seems to have issues with some modern routers:
Heh, I was the last one to leave a comment on that issue 5 years ago..
My tool of choice is https://github.com/hyprspace/hyprspace
https://github.com/samyk/slipstream is the new one
I didn't know about this tool, that's pretty useful!
Too bad the broken nature of NAT means this approach will just ignore any firewall rules you have configured and any malicious device or program can leverage it to open inbound connections.
But most of all, Samy is my hero.
If you are in the mood for a slightly less dumb pipe, I’ve been building a tunnel manager CLI built on Iroh. Supports forwarding ports over TCP, UDP, and UNIX sockets. https://gitlab.com/CGamesPlay/qtm
About once or twice a year a solution comes out that does this. Here is a great one for orchestrating connections: https://docs.spacebrew.cc/
While that may be true, the branding of this particular project seems unbeatable. A literal dumb pipe man with wacky arms. It just works.
I feel it was the same for IFTTT over a decade ago. People always move on to the next shiny thing.
"In 2023 it's..."
Good article from Tailscale on how direct connections are established, even when both nodes are behind NAT: https://tailscale.com/blog/how-nat-traversal-works
I wonder how much different it is from Wireguard + netcat. Both run encrypted channels over UDP, but somehow differently. What does QUIC offer that Wireguard does not?
QUIC includes a standard for peer address discovery: https://www.ietf.org/archive/id/draft-ietf-quic-address-disc...
Wireguard doesn't, which is why tailscale took off so much, since it offers basically that at its core (with a bunch of auxiliary features on top).
Show me some wireguard discovery/relay servers if I'm wrong.
Also, QUIC is more language-agnostic. The canonical user-space implementation of wireguard is in Go, which can't really do C FFI bindings, and the abstractions are about dealing with "wireguard devices", not "a single dumb pipe", so wireguard's userspace library also makes it surprisingly difficult to implement this simple thing without also bringing a ton of baggage (like tun devices, gateways, ip address management, etc) along for the ride.
If you already have a robust wireguard setup, then of course you don't need this and can just use socat or whatever.
They both run over UDP and always encrypt data. Beyond that superficial similarity they are completely different.
QUIC is a transport protocol that provides a stream abstraction (like TCP), with some improvements over TCP (like built-in support for multiplexing streams on the same connection, without head-of-line blocking issues).
Wireguard provides a network interface abstraction that acts as NIC. You can run TCP on top of a wireguard NIC (or QUIC for that matter).
Wireguard is a tunneling protocol. Netcat lets you write things over a socket. But in UDP mode netcat doesn't implement any mechanism for guaranteeing that all your packets arrive, so you're forced to tunnel TCP over UDP for reliability.
QUIC is all UDP, handling the encryption, resending lost packets, and reordering packets if they arrive out of order. The whole point of QUIC is to make it so you can get files transferred quickly.
WireGuard doesn't know the data you're sending, and netcat+TCP is stuck with the limitations of every packet needing to be sent and acknowledged sequentially.
Wireguard is opaque about the independent streams in its connection. So, while they both can encapsulate multiple concurrent streams in one connection, QUIC can do things like mitigate Head-of-Line Blocking and manage encryption at the transport layer. It also uses a connection ID on these substreams which helps make transitioning across network changes seamless.
If you set up multiple TCP connections over Wireguard, there is no head-of-line blocking either. And Wireguard also transitions across network changes.
In fact, it's one of the main reasons I use Wireguard. I can transition between mobile network and wifi without any of the applications noticing.
I've been using this for some years now https://magic-wormhole.readthedocs.io/en/latest/index.html
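The CLI is about as simple as it gets; the code phrase below is only illustrative of the format (each transfer gets a fresh one):
# sender
wormhole send holiday-photos.zip
#   Wormhole code is: 7-crossover-clockwork
# receiver
wormhole receive 7-crossover-clockwork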
user lotharrr https://news.ycombinator.com/user?id=lotharrr is the author of magic wormhole
Who is paying for the relays and why?
We (n0) are running one set of relays. They are rate limited, so they basically only help with the hole punching process.
Projects or companies that use iroh can either run their own relays or use our service https://n0des.iroh.computer/ , which among many other things allows spinning up a set of dedicated relays.
Just a heads up, I'm getting a 404 on the link to the relay docs (https://www.iroh.computer/docs/layers/connections) when attempting to click through.
Thanks for the heads up! We've fixed: https://github.com/n0-computer/dumbpipe.dev/pull/11
appreciate this, the links on dumbpipe.dev are updated now!
Does this require a 3rd party host, or is it peer-to-peer?
it's p2p. dumbpipe is hardcoded to use a public set of relays that we run for free (we being number 0, the team that makes iroh & dumbpipe).
we can definitely add a config argument to skip the hardcoded relays & provide custom ones!
Thanks for the response. This statement confuses me a bit. What is a relay? Does traffic go through it at all, or is it for connection negotiation, or some of both?
Your questions are answered in TFA, including multiple links to documentation about the process.
sibling comment with links to docs is the more accurate, but to summarize, it's some of both:
* all connections are always e2ee (even when traffic flows through a relay)
* relays are both for connection negotiation, and as a fallback when a direct connection isn't possible
* initial packet is always sent through the relay to keep a fast time-to-first-byte, while a direct connection is negotiated in parallel. typical connections send a few hundred bytes over the relay & the rest of the connection lifetime is direct
tl;dr: resolving ids to destinations requires a third party relay
relays _can_ be used to resolve IDs, but so can mDNS, an email or any other form of third party channel
the use case here is somebody opens a web browser and types/pastes an ID into the top bar -- and it needs to resolve, correctly, without prior knowledge, in roughly the same amount of time that DNS takes today
relays are the only thing among the things you listed that even have a chance of solving this problem
Peer to peer, unless both ends are behind NAT, then you need to run a relay.
They provide a default relay. It’s not clear to me whether you can manually specify a different relay.
They have docs for using self-hosted relays:
https://github.com/n0-computer/iroh/blob/main/iroh/docs/loca...
Unfortunately the link to the Config struct is broken. It should be:
https://github.com/n0-computer/iroh/blob/main/iroh-relay/src...
It'd be nice if the Getting Started link on the n0des page went here instead of immediately asking me to sign up before I know what the hell I'm signing up for
Dumbpipe is using our set of relays. It is meant as a standalone tool as well as a showcase for what you can do with iroh.
If you use iroh as a library, you can specify your own relays.
It is important to mention that relays are interoperable, so you don't have isolated bubbles of nodes using certain relay networks. I can have the n0 relays specified and still talk to another node that is using a different set of relays.
Is there a network topology where two hosts, each behind one or more layers of NAT, can both initiate outbound connections to public internet services (e.g., google.com), but are unable to establish a direct peer-to-peer connection due to NAT traversal limitations? I understand that NAT hole punching can work with a single level of NAT, but does it still function reliably across multiple layers of LAN/NAT hierarchy?
iroh is awesome, and this is such a good demo of how stupid simple it is to use
This is a really good marketing presentation for a command line tool
Very handy. We've developed an industrialized variant of this in RelayKit designed for fleets of fielded devices at scale with Anycast, mTLS, multiplexing of services through a single tunnel, Bring Your Own PKI and some other fleet management features that together become a somewhat smarter pipe: https://farlight.io
The surface being http is super nice to have. It's a streams-over-http general utility, quic powered.
I'm struggling to remember the name, but there's a simple HTTP service, called patchbay or some such, that does a store-and-forward pattern. This idea of very simple, very generic HTTP-powered services has a high appeal to me.
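The shape of it, as I remember it, is roughly this (the endpoint below is a made-up placeholder, not the real service): one curl blocks waiting to receive, another curl posts, and the HTTP service just matches them up.
# receiver: blocks until something is posted to the channel
curl -s https://example.com/q/mychannel > report.pdf
# sender
curl -s -X POST --data-binary @report.pdf https://example.com/q/mychannel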
Looking forward to a future version that can do WebTransport
>These dumb pipes use QUIC over a magic socket. It may be dumb, but it still has all the features of a full QUIC connection: UDP-based, stream-multiplexing and encrypted.
How is multiplexing used here? On the surface it looks like a single stream. Is the file broken into chunks and the chunks streamed separately?
In this particular example there is no multiplexing. It's just one QUIC stream.
In other iroh based protocols the ability to have many cheap QUIC streams without head-of-line blocking is very useful. E.g. we got various request/response style protocols where a large number of requests can be in flight concurrently, and each request just maps to a single QUIC stream.
The marketing is brilliant. The name of the company (number0) is mad hackerish man, right up my alley in the words of Charlie Murphy. I'm going to try this in my GCE on bare metal "unvirtualizer" today (number0 is what a Linux kernel would call the first tuntap with number as its prefix if you had such a patch).
These are my kind of people!
Every time I see `curl .. | sh` I feel bad. It shouldn't be the norm to pipe a downloaded script straight into a shell.
And especially not to run the script while it's still downloading. The remote server can detect the timing difference (say the script has a "sleep 30" and the pipe buffer fills) and send a different response (really easy when using chunked encoding or HTTP/2 data frames).
The script is 790 bytes, you can't fill a pipe with that.
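Size aside, the usual hedge is to break the pipe: fetch the script to a file, read it, then run it (the URL is a placeholder):
curl -fsSL https://example.com/install.sh -o install.sh
less install.sh    # skim it before running
sh install.sh
That also sidesteps the partial-execution trick above, since the server sees the full download complete before anything runs.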
Oh they use Iroh notice!
Kinda related to this, but is there something that runs a daemon on your local machine, where if a "file request document" is uploaded to Mega or Google Drive or something similar, a polling daemon recognizes the request and pushes the document/file to the file store service?
Dropbox :)
Or whatever ftp thing they mentioned on the Dropbox show HN ;)
I remember doing something like this with Skype many years ago (at least 15, I guess).
The old Skype, the one that was a real p2p app before it got bought by Microsoft, was very good at slicing through firewalls and NATs, and it offered a plugin API, so it was easy to implement a TCP tunnel with it.
Question: What is the security level behind this? I guess if it is "dumb" _anyone_ can input your identifier for the pipe and connect to it?? Or even listen on it?
Anybody who has the ticket and therefore has the public key can connect.
Once connected, the connection is encrypted using TLS with the raw public keys in TLS extension ( https://datatracker.ietf.org/doc/html/rfc7250 ).
So if it is point-to-point, there's only a really small window in which someone could try to brute-force it (almost impossible, I know), but if it is multi-point (i.e. multiple users can connect to that endpoint) then it could be brute-forced until someone gets in? I couldn't see whether it is single-connection or multi-connection...
Let me know if my understanding is incorrect, I don't have much experience with QUIC :)
I am not one of the cryptographers on the team, but I will try to answer to the best of my knowledge.
QUIC mandates TLS 1.3 or newer. From RFC 9001 (Using TLS to Secure QUIC): "Clients MUST NOT offer TLS versions older than 1.3."
For the first request, brute forcing would mean guessing a 32 byte Ed25519 public key. That is not realistically possible.
For subsequent requests, even eavesdropping on the first request does not allow you to guess the public key, since the part of the handshake that contains the public key is already encrypted in TLS 1.3.
With all that being said, if you want to have a long running dumbpipe listen, you might want to restrict the set of nodes that are allowed to connect to it. We got a PR for this, but it is not yet merged.
10 years ago I made an "encrypted voice channel" by chaining the following 3 commands together (I don't remember exactly how it looked, this is just a sketch):
arecord - | openssl aes-128-cbc -pass pass:secretstring | nc <dest ip> <dest port>
on the receiving end: nc -l <dest port> | openssl aes-128-cbc -d -pass pass:secretstring | aplay -
I don't remember exactly which audio device I used back then. It worked okay-ish, but there was definitely lag from somewhere. Just kind of neat that you can build something so useful without a bloated app, just chaining a few commands together.
This is cool
Wondering about speed. What transport does it use? Is it establishing a p2p connection, or transferring everything through a central server?
Reminds me of https://docs.pears.com/tools/hyperbeam
I love the Mr. Dumb Pipe character
Is there a way I can use this to run remote commands on another host? Something similar to what ssh does?
You could pipe to bash?
Ah right, but this does not support bidirectional streaming so I won't be able to get the remote stdout on the client, I guess.
Couldn’t you just pipe the stdout to another dumbpipe
Not a very friendly API
This works:
Remote:
$ socat TCP-LISTEN:4321,reuseaddr,fork EXEC:"bash -li",pty,stderr,setsid,sigint,rawer&
$ dumbpipe listen-tcp --host 127.0.0.1:4321
using secret key fe82...7efd
Forwarding incoming requests to '127.0.0.1:4321'.
To connect, use e.g.:
dumbpipe connect-tcp nodeabj...wkqay
Local:
$ dumbpipe connect-tcp --addr 127.0.0.1:4321 nodeabj...wkqay &
using secret key fe82...7efd
$ nc 127.0.0.1 4321
root@localhost:~#
You mean using "curl | sh" is a dumb pipe? Well, couldn't agree more :)
> In 2023 it's hard to connect two devices directly
It's not. Use tailscale.
Dumb is the new smart.
You can also do this via "netcat", running the commands below in a terminal.
Receiver (listening to port 31337):
`nc -l -p 31337`
Sender (connecting to receiver IP):
`nc <receiver_ip> 31337`
Want to send a message to the receiver:
`echo "Hello from Kocial" | nc <receiver_ip> 31337`
== if you want to send a file ==
Receiver:
`nc -l -p 31337 > hackernews.pdf`
Sender:
`nc <receiver_ip> 31337 < hackernews.pdf`
This doesn't punch through NATs at all, which is the point of Dumb Pipe
thanks everyone for all the love <3
[maintainer of iroh here]
Dumb question: can I port forward using this?
You can forward a locally running TCP service using dumbpipe listen-tcp.
E.g. you have a local development webserver running on 127.0.0.1:3000. You can expose it via dumbpipe using
dumbpipe listen-tcp --host 127.0.0.1:3000
You get a node ticket that contains details on how to connect. Put it into https://ticket.iroh.computer/ if you want to know what's in it.
Then on the other side, e.g. on a small box in the cloud, you can do this:
dumbpipe connect-tcp --addr 0.0.0.0:80 <ticket>
Any TCP request to the cloud box on port 80 will be forwarded to the dev webserver.
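Putting the two halves together (ports taken from the example above; the ticket and cloud IP are placeholders for whatever listen-tcp prints and wherever the box lives):
# dev machine: expose the local webserver
dumbpipe listen-tcp --host 127.0.0.1:3000
# prints a node ticket
# cloud box: accept on port 80, forward over the pipe
dumbpipe connect-tcp --addr 0.0.0.0:80 <ticket>
# anywhere: hitting the cloud box now reaches the dev server
curl http://<cloud-box-ip>/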
(2023)
Neato
Perfect for malware
"Dumb pipe" sounds like they forgot about security.
I don't understand this perspective. Dumb primitives that are secure are _exactly_ how you build secure systems!
You could describe this same project as "a smart pipe that punches through NATs & stays connected (...)" and it wouldn't be any more surprising or inaccurate than the current description. So maybe it is not that descriptive.
> ... that are secure ...
That's a huge assumption I wouldn't make after reading "dumb".
And from the article:
> Easy, direct connections that punch through NATs & stay connected as network conditions change.
This sounds more like a pipe that is trying to be smart. According to your principle, not something to build a secure system with.