> We cannot issue an IPv4 address to each machine without blowing out the cost of the subscription. We cannot use IPv6-only as that means some of the internet cannot reach the VM over the web. That means we have to share IPv4 addresses between VMs.
Give users the option to use IPv6 only, and if a user needs legacy IP, add it as an additional cost and move on.
Trying to keep v4 at the same cost level as v6 is not a thing we can solve. If it was we wouldn't need v6.
(exe.dev co-founder here)
IPv6 does not work on the only ISP in my neighborhood that provides gigabit links. I will not build a product I cannot use.
Even where IPv6 is rolled out, it is only exercised on consumer links via Happy Eyeballs. Links between DCs are entirely IPv4 even when dual-stacked. We just discovered 20 of our machines in a LAX DC have broken IPv6 (because we tried to use Tailscale to move data to them, and it defaults to Happy Eyeballs). Apparently the upstream switch configuration has been broken for months for hundreds of machines, and we are the first to notice.
I am a big believer in: first make it work. On the internet today, you first make it work with IPv4. Then you have the luxury of playing with IPv6.
A service that only does IPv6 is not "working" any more. I'm not saying to go v6 only, but there's no excuse to not support IPv6.
Have you looked at running each service through a Cloudflare Tunnel? (HE offers something similar too.)
(PS: I use exe.dev quite a lot whenever I have a project where basic scripting isn't enough and I want a full environment. Really, thanks for this product; I appreciate it as someone who has been using it since day one and have recommended your service warmly to people :>)
You can get this effect today by installing Tailscale on your exe.dev VM. :)
The reason we put so much effort into exposing these publicly is for sharing with a heterogeneous team without imposing a client agent requirement. The web interface should be easy to make public, easy to share with friends with a Google Docs-style link, and ssh should be easy to share with teammates.
That said, nothing wrong with installing tunneling software on the VM, I do it!
This is great if you have IPv6 support from your ISP. Not so great if you don't.
Before someone mentions tunnels: Last time I tried to set up a tunnel Happy Eyeballs didn't work for me at all; almost everything went through the tunnel anyway and I had to deal with non-residential IP space issues and way too much traffic.
ISPs won't bother with IPv6 until they've either run out of IPv4 space or the internet starts to use IPv6's advantages.
Discussions about IPv6 quickly end with "we have enough v4 space and there are no services that require v6 anyway". As long as the extra cruft for v4 support remains free or even supported, large ISPs won't care. We're at the point where people need to deal with things like peer to peer connectivity with two sides behind CGNAT which require dedicated effort to even work.
I know it sucks if none of the ISPs in your area support IPv6 and you're left with suboptimal solutions like tunnels from HE, but I think it's only reasonable all this extra cost or effort becomes visible at some point. Half the world is on v6, legacy v4-only connections are becoming the minority now.
I have had native IPv6 since 2010, from two different ISPs.
It is also available for one of my phone contracts, but I have not tried enabling it yet.
Well, you're very lucky (genuinely).
In 2025, I tried to access my services using IPv6 with 4G phones and different subscriptions (different ISPs), fact is, many (most?) of them did not support IPv6 at all :(
I had to revert to IPv4. And really I have nothing against IPv6, but yeah, as a simple user, self hosting a bunch of services for friends and family: it was simply just not possible to use only IPv6 :(
(for context, the 4G providers are French, in metropolitan France)
There is not a single ISP in Australia that doesn't provide proper IPv6 support, with the vast majority granting /48 prefixes.
It is purely an excuse to not use it in 2026.
There is not a single ISP in my area that provides any IPv6 support whatsoever. This is also the case for many, many millions of others around the world.
Super interesting, but the person you're responding to lives in France.
My phone contract that does offer IPv6 is with Free; I could not work out whether it would disable IPv4 if I enabled IPv6, so I have not tried changing it.
Conversely, I had IPv6 for about 5 years from an ISP and when I switched providers, the new ISP was IPv4 only. A few years later and they now support IPv6, but my firewall setup is now IPv4 only, so I've not bothered to update it.
(exe.dev co-founder here)
We are not running out of IPv4 space because NAT works. The price of IPv4 addresses has been dropping for the last year.
I know this because I just bought another /22 for exe.dev for the exact thing described in this blog post: to get our business customers another 1012 VMs.
Yep. As sad as it is for p2p, NAT handles most uses cases for users, and SNI routing (or creative hacks like OP) handles most use cases for providers.
I was surprised how low IPv4 prices have gotten. Lowest since at least 2019.
Amazingly even most p2p works with NAT, see (and I am biased here) Tailscale.
I certainly wish we simply had more addresses. But v4 works.
Your NAT traversal article is amazing, but sadly the long tail (ha) means any production quality solution has to have relays, which is a huge complexity jump for people who just want to run some p2p app on their laptop.
And it's not clear it will ever be better than it is now with CGNAT on the rise.
Would love to hear I'm wrong about this.
Are there really ISPs that don't support IPv6? I've had IPv6 from various ISPs since around 2010, and even my phone gets an IPv6 address from the cellular network.
Yes and it's ANNOYING. In Switzerland there is literally not one cellular network that issues IPv6 addresses. Also my workplace network (a school using some sort of Microslop solution) doesn't issue IPv6es.
I have an IPv6-only VPN with some personal services. Theoretically, the data can be transported via IPv4, but Android doesn't even query AAAA records if it doesn't have a route for [::]/0. So when I'm not home, I can't reach my VPN servers, because there is supposedly no address.
(I fix it by routing all IPv6 traffic through my VPN. Just routing connectivitycheck may suffice though).
Anything Microsoft lacking v6 is a configuration issue. Ever since Vista, Windows networking (in corporate environments) treats v4-only as a somewhat "degraded" configuration. (Some time ago there was even a funny news post about how Microsoft was forced to keep v4 enabled on its guest WiFi after switching everything else to v6 only.)
It varies in different parts of the world. Here in New Zealand all except one fixed line (i.e. fibre/xDSL) provider offers IPv6 (the only holdout being the ex-government telco). Wireless/mobile (4G/5G mobile or FWA) is a different story however, as all wireless/mobile networks are IPv4-only still to this day (even though two of them are also fixed line providers offering IPv6 via their fixed line service!).
Bell Canada does not provide IPv6 to Internet customers but their cell network does support it. They're one of what we call "the big three".
https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
Looks like Canada has roughly 40% adoption, and USA roughly 50% adoption.
I complained, as a yearly tradition, for a couple of years to get v6 enabled at my ISP. They had the core network enabled on World IPv6 Launch in 2012, but never deployed it to end customers.
One simple way to check whether your ISP has some kind of IPv6 network is to see if the CDN domains handed out by YouTube and Facebook have AAAA records.
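That check is easy to script; here's a minimal stdlib sketch (the hostnames in the usage comment are just examples, and this only tells you whether the name has AAAA records, not whether your own ISP can route v6):

```python
import socket

def has_aaaa(host: str) -> bool:
    """Return True if `host` resolves to at least one IPv6 (AAAA) address."""
    try:
        return bool(socket.getaddrinfo(host, None, family=socket.AF_INET6))
    except socket.gaierror:  # no AAAA records, or resolution failed entirely
        return False

# Example: check the CDN hostnames your video/social traffic actually uses
# for cdn in ("facebook.com", "googlevideo.com"):
#     print(cdn, has_aaaa(cdn))
```

Combine it with an actual connection attempt over v6 for a full end-to-end check.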
We shouldn't have to ask for ISPs to add IPv6 support but here we are.
You could also provide a dual stack jump host. Then v4-only clients just set the ProxyJump option to get to all the v6-only hosts via the jump host.
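With a one-time entry in the client's ~/.ssh/config, that jump hop becomes invisible (host names here are hypothetical):

```
# ~/.ssh/config
Host *.v6only.example.com
    ProxyJump jump.example.com    # dual-stack bastion reachable over v4
```

After that, a plain `ssh user@vm1.v6only.example.com` works from v4-only clients, with private keys staying on the local machine.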
Why not just assign across different ports? Seems like a straightforward solution.
My guess is that they want to keep the url clean.
I have seen that port technique used in NAT servers.
They could have done that in addition (and maybe they do), but for some of their customers it then may not work, for reasons that are hard to understand as a customer. Especially when changing locations frequently, it may sometimes work and sometimes not; not good for keeping customers.
This is the way.
Op solved a problem and your comment is "I wouldn't have solved the problem".
>legacy IP
lol
It's a nice solution for sure, but a problem by choice. You could just have an AAAA record for the domain in addition to the A record, and as GP pointed out, resolve SSH sessions via the IPv6. If the user wants SSH to work with IPv4 for whatever reason—I see the point that there may be some web visitors without IPv6 still, but devs?—they could pay a small extra for a dedicated IPv4 address.
Products targeted at developers like to get a foothold in large corporations "by stealth" - let the developers experience what a great product it is first, before they have to do the approval paperwork.
With this IPv4 trick, if your employer or university only provides IPv4 you can use the product anyway.
They could buy a dedicated IPv4 address, but that address still has to be tunneled through [EDIT:] IPv6 networks if that dev has no access to [EDIT:] IPv4 networks. Thus DX still suffers. [ADDENDUM: I mistakenly swapped "IPv4" and "IPv6" there. See comments.]
I'm not sure I understand your point; if exe.dev operates a dedicated IP solely so a specific mythical IPv6-less developer can connect to a specific server, then there's no tunnelling involved at all.
Oops, I think I mixed up two sentences in the middle. A fixed comment is available. But I also probably misinterpreted what you were saying:
> they could pay a small extra for a dedicated IPv4 address.
Did you mean that the dedicated IPv4 address to connect via SSH? Then my objection doesn't apply.
I've worked in big companies long enough to know that "deprecated" or "legacy" mean "the thing we actually rely on"
They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either. With only one routable IP for the host, all traffic on a port shared by VMs has to go to a server on the host first (unless you route based on port or source IP with iptables, but that is not hostname based).
The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM. The client can't ever send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header, only because nginx is on the host.
What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:
ssh user@vm1.box1.tld becomes: ssh -J jumpusr@box1.tld user@vm1
And just give jumpusr no host permissions and a shell set up to only allow ssh forwarding.
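Locking down that jump user server-side could look roughly like this in sshd_config (an untested sketch; ProxyJump only needs a direct-tcpip channel, not a shell):

```
# /etc/ssh/sshd_config on box1.tld (sketch)
Match User jumpusr
    AllowTcpForwarding yes         # ProxyJump tunnels over a direct-tcpip channel
    PermitTTY no                   # no interactive terminal
    X11Forwarding no
    ForceCommand /usr/sbin/nologin # any session request is immediately rejected
```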
> The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM.
That's one implementation. Another implementation is the proxy looks at the SNI information in the ClientHello and can choose the correct backend using that information _without_ decrypting anything.
Encrypted SNI and ECH requires some coordination, but still doesn't require decryption/trust by the proxy/jumpbox which might be really important if you have a large number of otherwise independent services behind the single address.
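For the curious, extracting SNI from a ClientHello really is just byte-offset arithmetic over the unencrypted handshake. A simplified Python sketch (no record fragmentation or malformed-input hardening):

```python
import struct

def extract_sni(data: bytes) -> "str | None":
    """Pull the SNI hostname out of a raw TLS ClientHello record,
    without decrypting anything."""
    if len(data) < 5 or data[0] != 0x16:        # 0x16 = handshake record
        return None
    pos = 5                                     # skip 5-byte record header
    if data[pos] != 0x01:                       # 0x01 = ClientHello
        return None
    pos += 4                                    # handshake type + 3-byte length
    pos += 2 + 32                               # client version + random
    pos += 1 + data[pos]                        # session id
    (cs_len,) = struct.unpack(">H", data[pos:pos + 2])
    pos += 2 + cs_len                           # cipher suites
    pos += 1 + data[pos]                        # compression methods
    if pos + 2 > len(data):
        return None                             # no extensions block
    (ext_total,) = struct.unpack(">H", data[pos:pos + 2])
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack(">HH", data[pos:pos + 4])
        pos += 4
        if ext_type == 0x0000:                  # server_name extension
            # skip list length (2) and name_type (1), then 2-byte name length
            (name_len,) = struct.unpack(">H", data[pos + 3:pos + 5])
            return data[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

A proxy peeks at the first record on the socket, picks a backend from the returned name, then splices the bytes through untouched.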
The point is that they want the simple UX of "ssh vm1.box1.tld" takes you to the same machine that browsing to vm1.box1.tld takes you to, without requiring their users to set any additional configuration.
You can have that already? It's just dns. Are you saying different vms share the same box1 ip? Well then yeah, you want a reverse proxy on some shared ip.
> Well then yeah, you want a reverse proxy on some shared ip.
At that point you run into the problem that SSH doesn't have a host header and write this blog post.
Yeah, ftp has the same issue depending on implementation.
Most host/port services have the same issue; even HTTPS used to have it, and it's the reason SNI was introduced. But if by implementation you mean sftp, then of course: it uses SSH.
I wonder if SSH supports SRV records and if it would help.
I ended up doing something like this for a separate use case (had to host a bunch of Drupal instances, and for some reason end users needed shell access).
For the proxy I did not rely on a "proper" ssh daemon (like openssh), but wrote my own using a Go library called gliderlabs/ssh. That in particular allowed me to implement only a TCP forwarding callback [1], and not provide any shell access at the protocol level. It also made deployment nicer: no need for a full VM, just a container was sufficient.
It is also worth noting that the -J can be moved into .ssh/config using the ProxyJump option. It does mean end users need a config file, but it does allow typing just a plain ssh command.
[1] https://pkg.go.dev/github.com/gliderlabs/ssh#ForwardedTCPHan...
If jump host shell aliases were a valid option, then setting a port would be a much easier valid option.
>They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either.
> Proceeds to explain how the HTTP traffic flows based on the hostname.
If you wanted to flex on your knowledge of the subject you could have just led the whole explanation with
>"I know all about this, here's how it works."
Also
>"What they want is a reverse proxy for SSH"
They already did this, I'm much more impressed by the original article that actually implemented it than by your comment "correcting them" and suggesting a solution.
SSH is an incredibly versatile and useful tool, but many things about the protocol are poorly designed, including its essentially made-up-as-you-go-along wire formats for authentication negotiation, key exchange, etc.
In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys in large part because they're stored "bare" rather than encapsulated into a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted:
https://cryptographycaffe.sandboxaq.com/posts/survey-public-....
That's the point, though. An SSH key gives authentication, not authorization. Generally a certificate is a key signed by some other mutually trusted authority, which SSH explicitly tried to avoid.
SSH does support certificate based auth, and it’s a great upgrade to grant yourself if you are responsible for a multi human single user system. It grants revocation, short lifetime, and identity metadata for auditing, all with vanilla tooling that doesn’t impose things on the target system.
> multi human single user system
A rather niche use-case to promote certificate auth... I'd add the killer-app feature is not having to manage authorized_keys.
They are remarkably common on long-lived enterprise Linux servers. Think e.g. database servers or web servers that are of the (much longer-lived) pet era, not the cattle era.
Not sure why you need to belittle one example just to add another
Agreed, this makes sense in principle.
But what I found, empirically, is that a substantial number of observable SSH public keys are (re)used in a way that allows a likely-unintended and unwanted determination of the owner's identities.
This consequence was likely not foreseen when SSH pubkey authentication was first developed 20-30 years ago. Certainly, the use and observability of a massive number of SSH keys on just a single server (ssh git@github.com) wasn't foreseen.
You can also sign ssh host keys with an ssh ca.
See ssh_config and ssh-keygen man-pages...
What good does a certificate format do? It certainly won't stop people reusing keys the same way.
> where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.
I feel like it's obvious that ssh public keys publically identifies me, and if I don't want that, I can make different keys for different sites.
> > where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.
> I feel like it's obvious that ssh public keys publically identifies me, and if I don't want that, I can make different keys for different sites.
You're probably not the only one for whom it's obvious, but it appears to be not at all obvious to large numbers of users.
ssh by default sends all your public keys to a server. Yes you can limit some keys to specific hosts but it's very easy to dox yourself.
Doesn’t it try one key at a time rather than send all?
True, but a server that wants to "deanonymize" you can just reject each key until it has all your default keys plus the ones you added to your ssh agent.
You can try it yourself [0] returns all the keys you send and even shows you your github username if one of the keys is used there.
[0] ssh whoami.filippo.io
Nice, tried it out. This wording is incorrect though:
"Did you know that ssh sends all your public keys to any server it tries to authenticate to?"
It should be may send, because in the majority of cases it does not in fact send all your public keys.
It does, and there's typically a maximum number of attempts (MaxAuthTries defaults to 6 IIRC) before the server just rejects the connection attempt.
Yep, but this is server-side setting. Were I a sniffer, I would set this to 10000 and now I can correlate keys.
Modern sshd limits the number of retries. I have 5 or 6 keys and end up DoSing myself sometimes.
This thread made me realize why fail2ban keeps banning me after one failed password entry :lightbulb:
I had never thought about that. Seems like an easy problem to fix by sending salted hashes instead.
The server matches your proposed public key with one in the authorized_keys file. If you don't want to expose your raw public key to the server, you'll need to generate and put a hashed key format into the authorized_keys file, which at that point is the same as just generating a new purpose-built key, no? Am I missing something?
so it's good practice to store keys in a non-default location and use ~/.ssh/config to point to the right key path for each host?
What a great case of "you're holding it wrong!" I need to add individual configuration to every host I ever want to connect to before connecting to avoid exposing all public keys on my device? What if I mistype and contact a server not my own by accident?
This is just an awfully designed feature, is all.
> add individual configuration to every host I ever want to connect
Are you AI?
You can wildcard-match hosts in ssh config. You generally have fewer than a dozen keys, and it's not that difficult to manage.
I have over a dozen ssh keys (one for each service and duplicates for each yubikey) and other than the 1 time I setup .ssh/config it just works.
I have the setting to only send that specific host’s identity configured or else I DoS myself with this many keys trying to sign into a computer sitting next to me on my desk through ssh.
Like I can’t imagine complaining about adding 5 lines to a config file whenever you set up a new service to ssh onto. And you can effectively copy and paste 90% of those 5 short lines, just needing to edit the hostname and key file locations.
I would say it's best practice to use a key agent backed by a password manager.
Specifically to use a different key for each host.
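That per-host key discipline can be enforced client-side so ssh never offers the wrong key; a sketch (hosts and key paths are examples):

```
# ~/.ssh/config
Host *
    IdentitiesOnly yes                  # only offer the key configured per host

Host github.com
    IdentityFile ~/.ssh/id_ed25519_github

Host *.internal.example.com
    IdentityFile ~/.ssh/id_ed25519_work
```

With IdentitiesOnly set, ssh stops walking through every key in the agent, which also avoids the MaxAuthTries self-DoS mentioned above.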
SSH does have a certificate format that can place restrictions on what the user can do when connecting with that key. I'm not so sure about the hostkey side of things though.
For example: https://smallstep.com/blog/ssh-vs-x509-certificates/#certifi... you can see here that X11 forwarding is permitted for this certificate, among other things.
I would love it if more systems just understood SRV records, hostname.xyz = 10.1.1.1:2222
So far it feels like only LDAP really makes use of it, at least with the tech I interact with
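For reference, the mapping described above would be a single zone entry (names, TTLs, and addresses hypothetical); the catch is that SSH clients don't consult SRV records by default:

```
; _service._proto.name        TTL  IN SRV prio weight port target
_ssh._tcp.vm1.example.com.   3600  IN SRV 10   0      2222 box1.example.com.
box1.example.com.            3600  IN A   10.1.1.1
```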
Even with SRV records, there’s still the problem of middleboxes restricting protocol traffic to certain ports. (There’s another comment thread in which we discuss this.) In practice, SRV records work much better inside network borders than on the larger Internet.
This has history: https://egopoly.com/2008/02/ssh-slow-on-leopard.html
I also know of https://github.com/Crosse/sshsrv and other tricks
I agree more SRV records would have helped with a tremendous number of unnecessary proxies and wasted heat energy from unnecessary computing, but in this day and age, I think ECH/ESNI-type functions should be considered for _every_ new protocol.
SRV is essentially a simple layer of abstraction that provides (via one approach) the required end result (reachability + UX) and is easy to add to any $PROTO client without changing the protocol itself. Supporting ESNI would complicate the actual lib/protocol, increase the amount of dev and maintenance work required all around, significantly increase complexity, and require more infrastructure and invasive integration than any DNS-enabled service already uses.
It’s also similar with mDNS on local networks. It’s actually nice!
Overall, DNS features are not well implemented in most software stacks.
A basic example: DNS resolution actually returns a list of IPs, and the client should try them sequentially or in parallel, so that one can be down without impact and without annoying TTL-propagation issues. Yet many languages have a std lib that gives you back a single IP, or an HTTP client that assumes only one, the first.
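A client that honors the full answer list is only a few lines; a minimal sketch in Python (a real implementation would race attempts Happy Eyeballs-style rather than go strictly sequentially):

```python
import socket

def connect_any(host: str, port: int, timeout: float = 3.0) -> socket.socket:
    """Try every address DNS returns for host (IPv6 and IPv4) until one
    accepts, instead of assuming the first entry works."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            s.settimeout(None)
            return s                   # first address that answers wins
        except OSError as err:
            s.close()
            last_err = err             # remember the failure, try the next one
    raise last_err or OSError(f"no usable addresses for {host!r}")
```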
I love that kubernetes does this for cluster service domain names
SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?
You are correct, but I expect they instruct their users to run with host key validation disabled (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null), as they expect these are ephemeral instances.
I mean, anytime you use the cloud for anything, you are giving MITM capabilities to the hosting provider. It is their hardware, their hypervisors... they can access anything inside the VMs
Not if it's using Confidential Computing. Then you're trusting "only" the CPU vendor (plus probably the government of the country where that vendor is located), but you're trusting the CPU already.
This approach doesn't give the hypervisor access to your private keys; it gives other tenants access to your private keys.
I think the vulnerability would be that not only the host can now MITM, but other co-tenants would have the capability to bypass that MiTM protection.
There are about 60k ports you can choose from for each IP, so I don’t understand why you can’t just give one user 1.2.3.4:1001 and the other 1.2.3.4:1002 and route that.
Setting it up like this where you just assume:
> The public key tells us the user, and the {user, IP} tuple uniquely identifies the VM they are connecting to.
Seems like begging for future architectural problems.
Something like getting SSH to support SRV records would allow that to be transparent to the user: https://github.com/Crosse/sshsrv
Then you need a firewall update for each new user.
Whereas matching on user+ip is a one-time proxy install.
Yeah, I ran into this problem too. I tried a few different hacky solutions and then settled on using port knocking to sort inbound ssh connections into their intended destinations. Works great.
I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as you would for any other environment. There's an option in sshd that allows you to run a script during a connection request so you can almost juggle connections according to the username -- if I remember right, it's been several years since I tried that -- but it's terribly fragile and tends to not pass TTYs properly and basically everything hates it.
But, set up knockd, and then generate a random knock sequence for each individual user and automatically update your knockd config with that, and each knock sequence then (temporarily) adds a nat rule that connects the user to their destination container.
When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.
Been using this for a few years and no problems so far.
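Assuming knockd, the per-user rule described above might look something like this (the sequence, ports, and container IP are made up):

```
# /etc/knockd.conf (sketch)
[user-alice]
    sequence      = 7013,8047,9021     # random per-user knock sequence
    seq_timeout   = 5
    tcpflags      = syn
    start_command = iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.11:22
    cmd_timeout   = 30                 # rule lives 30s, long enough to connect
    stop_command  = iptables -t nat -D PREROUTING -s %IP% -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.11:22
```

knockd substitutes %IP% with the knocker's source address, so only that client gets NATed through to its container.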
Doesn't this require configuration at the end user, so you could just as easily ProxyJump or use a different port?
It's a nice solution but I've been looking for something more transparent (getting them to configure an SSH key is already difficult for them). A reverse proxy that selects backend based solely on the SSH key fingerprint would be ideal
That's all true, but juggling connections based on key fingerprints would also require users to have different keys for different containers -- which is good practice, but I've found that it's equally difficult for users unfamiliar with ssh to set up and properly manage more than one key, and it's equally easy for users familiar with ssh to manage multiple client configs.
That and ProxyJump both also require the container-host to negotiate ssh connections, which is... fine, I guess? But the port knocking approach means that the only thing the container-host is doing is port forwarding, which gives it like half an extra point in my calculus.
This is a clever trick, but I can’t help but wonder where it breaks. There seems to be an invariant that the number of backends a public key is mapped to cannot exceed the number of proxy IPs available. The scheme probably works fine if most people are only using a small number of instances, though. I assume this is in fact the case.
Another thing that just crossed my mind is that the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
They just need to set the limit on the number of VMs per user to be less than or equal to the number of public IPs they have available. As long as two users don't try to share a key, you are good... which should be easy, just don't let them upload a key that another user has already uploaded.
I also wonder what happens if you want to grant access to your VM to additional public keys and one of those public keys happen to already be routed to a different VM on the same IP.
GitHub has a similar system and just refuses to let you add the key if it already exists. It's hacky, but it's also obviously massively widespread.
I just encountered this the other day, in fact. You cannot utilize a single SSH key with multiple GitHub accounts.
Why not "ssh undefined-behavior@exe.xyz" (naming based on the example in the blog)? That way, you would have the "Host header" as username.
Isn't this solving the problem? https://github.com/balena-io/sshproxy
Two options I use:
1. Client side: ProxyJump, by far the easiest
2. Server side: use ForceCommand, either from within sshd_config or .ssh/authorized_keys, based on username or group, and forward the connection that way. I wrote a blogpost about this back in 2012 and I assume this still mostly works, but it probably has some escaping issues that need to be addressed: https://blog.melnib.one/2012/06/12/ssh-gateway-shenanigans/
The workaround I use for my own stuff is to have a single jump-host that listens on the public IPv4 address and from there connect to the others. I can still just ssh username@namedhost (which could be username@www.websitehostedonthevm.tld, though I usually give short aliases in .ssh/config) without extra command-line options with the on-time config of adding a host entry in .ssh/config listing the required jump host and internal IP address. Connecting this way (rather than alternatives like manual multi-hop) means all my private keys stay local rather than needing to be on the jump host, without needing to muck around with a key agent.
I even do this despite having a small range of routable IPv4s pointing at home, so I don't really need to most of the time. And as an obscurity measure the jump/bastion host can only be contacted by certain external hosts too, though this does still leave my laptop as a potential single point of security failure (and of course adds latency) and one or any bot trying to get in needs to jump through a few hoops to do so.
In kinda the same situation, I was using username for host routing. And real user was determined by the principal in SSH certificate - so the proxy didn't even need to know the concrete certificates for users; it was even easier than keeping track of user SSH keys.
Certificate signing was done by a separate SSH service, which you connected too with enabled SSH agent forwarding, pass 2FA challenge, and get a signed cert injected into your agent.
Can you expand on your solution a little bit? AFAIK principals don't impact the user that is logged in at all. A principal in the cert and in the authorized list just allows the user to log in as any user they want, which is why you have to write a script that validates the username before listing principals to accept.
I'd love to learn more about how you solved it and what I may be mistaken about.
What I had is roughly the following: users connects via SFTP to external.website.com@my.proxy.com. Proxy server (which handles SSH protocol itself) authenticates the user using the principal, then checks whether this principal is allowed to access an external web-site and what exactly it can do here. Then proxy connects to the external website using its own secret credentials. In the end, it solved the problem of having a shared google doc with a bunch of passwords in there which everyone had access to.
While not transparent to users, I'd just use SSH ProxyCommand like I did in https://github.com/ThomasHabets/huproxy
Not exactly what i built in for, but it'll do the job here too, and able to connect to private addresses on the server side.
I had to reread the first paragraph several times before I understood: the author was misusing a term.
> unexpected-behaviour.exe.dev
That is not a URL, that's a fully qualified domain name (FQDN), often referred to as just 'hostname'.
I wonder if it's something like https://github.com/cea-hpc/sshproxy that sits in the middle (with decryption and everything) or if they could do this without setting up a session directly with the client.
Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.
EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper
will try to remember to look later.
Almost certainly it does, as public key auth takes place after setting up the session encryption
Wouldn't a much simpler approach be to have everyone log in to a common server which sits on a VPN with all the VMs? It introduces an extra hop, but this is a pretty minor inconvenience and can be scripted away.
They kind of already have a central point with 'ssh exe.dev', which hosts the interface for provisioning new VMs. But yeah, still one extra step for the user.
I'm building something that has to share a pool of phone numbers for SMS between many businesses with many clients and the architecture I had planned out looks a lot like this - client gets assigned a phone number from the pool for all its interactions with a certain business.
Good write-up of a tricky problem, and I'm glad to get real-world validation of the solution I was considering.
The Host header is a poorly designed, built-in SOCKS5. Use the proper SOCKS5 protocol instead. Its intended purpose is proxying access to inner networks, which became ubiquitous with this docker/kube/microservice thing.
Hosting DNS on the same machine as your application opens up all sorts of nice hacks. For example, you can add domain names to nf_conntrack by noticing the client resolving example.com to 10.0.0.1, then making a connection to 10.0.0.1 tcp/443. This was how I made my own “little snitch” like tool.
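A minimal sketch of that trick, assuming we control the local resolver and can observe new flows (the function names and hook points below are illustrative, not from an actual tool):

```python
# Map of IP -> the domain name a client most recently resolved to it.
recent_answers: dict[str, str] = {}

def on_dns_answer(name: str, ip: str) -> None:
    # Hook: called by the local resolver when it hands an answer to a client.
    recent_answers[ip] = name

def label_flow(dst_ip: str, dst_port: int) -> str:
    # Hook: called when a new outbound flow appears (e.g. from nf_conntrack
    # events). Attribute it to the name the client just resolved, falling
    # back to the bare IP when we never saw a lookup.
    name = recent_answers.get(dst_ip, dst_ip)
    return f"{name}:{dst_port}"
```

The whole "little snitch" effect falls out of correlating those two hooks by destination IP.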
This would be a great use case of SSH over HTTP/3[0]. Sadly it doesn't seem to have gained traction.
[0]: https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
Initial thoughts are that it's a meh protocol that does not look well thought out and has fewer features than SSH, to the point I'm not sure it deserves to be called SSH3 rather than telnet-over-websockets. Also, there's already an SSH3 https://marc.info/?l=openssh-unix-dev&m=99840513407690&w=2 so I _really_ think the thing you're thinking of is just some namesquatting, and I wouldn't assume it has any connection to OpenSSH or SSH.
I also know how to use SRV records so this is a non-issue for me and everyone I work with.
We all should do our part to move to IPv6, the sooner, the better.
This is a problem I've come up against a few times. Enforcing a different key per server would also help solve it in their case, but really I just want a haproxy plugin that allows selecting a backend based on the public key
Once hooked into PAM to have a central "ssh box" mount remote boxes' filesystems on user connect. Just need to have a lookup table: which username belongs to which customer('s server). Ezpz.
I'm not sure I understand what this achieves compared to just assigning an IP + port per VM?
Using nonstandard ports would break the `ssh foo.exe.dev` pattern.
This could also have been solved by requiring users to customize their SSH config (coder does this once per machine, and it applies to all workspaces), but I guess the exe.dev guys are going for a "zero-config, works anywhere" experience.
Zero-config usually means the complexity got shoved somewhere less visible. An SSH config is fine for one box, but with a pile of ephemeral workspaces it turns into stale cruft fast, and half the entries are for hosts you forgot existed.
The port issue is also boringly practical. A lot of corp envs treat 22 as blessed and anything else as a ticket, so baking the routing into the name is ugly, but I can see why they picked it, even if the protocol should have had a target name from day one.
SSH configs support wildcards, so if you couple it with a ProxyCommand you can get an arbitrary level of dynamism for a host pattern (like *.exe.dev).
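For instance, a sketch of such a wildcard entry (the `connect-vm` helper is hypothetical - it would resolve the typed hostname to the right backend and pipe the connection through):

```
# ~/.ssh/config - one pattern covers every workspace.
Host *.exe.dev
    # %h is the full hostname the user typed, %p the port.
    ProxyCommand connect-vm %h %p
```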
But yeah, everything is a trade-off.
Too bad most SSH clients don't seem to support SRV records, they would've been perfect for this:
;; Domain: mydomain.com.
;; SSH running on port 2999 at host 1.2.3.4
;; A Record
vm1928.mydomain.com. 1 IN A 1.2.3.4
;; SRV Record
_ssh._tcp.vm1928.mydomain.com. 1 IN SRV 0 0 2999 vm1928.mydomain.com.
If supported, it would result in just being able to do "ssh vm1928.mydomain.com" without having to add "-p 2999".
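Until clients grow native SRV support, the lookup can be approximated in client config. A sketch, assuming `dig` and `nc` are installed (the domain is the example one above):

```
# ~/.ssh/config - emulate SRV lookup for one domain.
Host *.mydomain.com
    # dig prints "priority weight port target"; fields 3 and 4 are the
    # port and host from the SRV record, which we hand to nc.
    ProxyCommand sh -c 'set -- $(dig +short SRV _ssh._tcp.%h); exec nc "$4" "$3"'
```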
Not needing a different port. Middleboxes sometimes block ssh on nonstandard ports. Also, to preserve the alignment between the SSH hostname and the web service hostname, as though the user was accessing a single host at a single public address. Usability is key for them.
Why would anyone configure it to do that?
Like, I understand the really restrictive ones that only allow web browsing. But why allow outgoing ssh to port 22 but not other ports? Especially when port 22 is arguably the least secure option. At that point let people connect to any port except for a small blacklist.
Middlebox operators aren't known for making reasonable or logical decisions.
Asking back, when I limit the outgoing connections from a network, why would I account for any nonstandard port and make the ruleset unwieldy, just in case someone wanted to do something clever?
A simple ruleset would only block a couple dangerous ports and leave everything else connectable. Whitelisting outgoing destination ports is more complicated and more annoying to deal with for no benefit. The only place you should be whitelisting destination ports is when you're looking at incoming connections.
I definitely block outgoing ports on all our servers by default: established connections, HTTP(S), DNS, NTP, plus infra-specific rules. There is really no legitimate reason to connect to anything else. The benefit is defence against exfiltration.
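One possible shape for such a default-deny egress policy (iptables syntax; the allowed ports mirror the list above and are illustrative, not a complete ruleset):

```
# Default-deny everything outbound, then punch specific holes.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53  -j ACCEPT                    # DNS
iptables -A OUTPUT -p tcp --dport 53  -j ACCEPT                    # DNS over TCP
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT   # HTTP(S)
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT                    # NTP
```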
If you're allowing direct https out, how are you stopping exfiltration?
Maybe https is routed through a monitoring proxy, but in the situation of allowing ssh, the ssh wouldn't be going through one. So I still don't see the point of restricting outgoing ports on a machine that's allowed to ssh out.
You can't, reasonably. It's just a heuristic against many exploits using non-standard ports to avoid detection by proxies or traffic inspection utilities.
I’m not a network security expert, so I don’t know the threat model. I just know that this is a thing companies do sometimes.
They don't want each VM to have a different public IP.
Middleboxes are not relevant in this scenario.
Uh, why not? Unless your SSH client is on the same network as theirs, there are going to be middleboxes somewhere in the path.
Because your ISP shouldn't (and most don't) alter traffic.
But you’re not considering the many business environments that do.
I don't, because that would be impossible. Every business has different rules. But if you (as a business) want to use this, you will find a way to make the changes to those "middleboxes". It's not your network, it's your business's network.
Why not include header in the username field :)
Take a look at this repo: https://github.com/mrhaoxx/OpenNG
It allows you to connect multiple hosts using the same IP, for example:
ssh alice+hostA@example.com -> hostA
ssh alice+hostB@example.com -> hostB
I think that would work just fine for most use cases, though you may run into people trying to set up weird usernames on their VMs that conflict with the host split config.
Still, this is the best zero-config solution in my opinion, much simpler than the solution they decided to go with.
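A sketch of the username-splitting idea (the separator and fallback behavior are illustrative, not necessarily OpenNG's actual implementation):

```python
def split_login(login: str, separator: str = "+"):
    """Split an SSH login like "alice+hostA" into (user, target host).

    If no separator is present, return None for the host so the proxy
    can fall back to a default, mirroring a plain "ssh alice@example.com".
    Splits at the FIRST separator, so "weird+user+name" routes user
    "weird" to host "user+name" - exactly the collision hazard noted above.
    """
    user, sep, host = login.partition(separator)
    return (user, host) if sep else (user, None)
```

The collision risk is why some setups reserve the separator character in usernames entirely.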
> SSH, on the other hand, has no equivalent of a Host header.
SSH cannot multiplex to different servers on the same host:port. But you can use multiple ports and forwarding.
You could give each machine a port number instead of a host name:
ssh-proxy:10001
ssh-proxy:10002
When you ssh to "ssh-proxy:10002" ("ssh -p 10002 ssh-proxy" with your OpenSSH client that doesn't take host:port, sigh), it forwards that to wherever the 10002 machine currently is. It would be interesting to know why they rejected the port number solution, but the only hit for "port" in the article is in the middle of the word "important" in the sentence:
But uniform, predictable domain name behavior is important to us, so we took the time to build this for exe.dev.
You can have uniform, predictable domain + port behavior. Then you don't need a smart proxy which routes connections based on identities like public keys. Just manipulation of standard port forwarding (e.g. iptables).
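For illustration, the port-to-VM routing can be done entirely in a dumb relay with no knowledge of SSH, since the handshake just passes through. A self-contained sketch (the port numbers and backend addresses are made up):

```python
import asyncio

# Hypothetical mapping: public proxy port -> the VM's SSH endpoint.
PORT_MAP = {
    10001: ("127.0.0.1", 2201),
    10002: ("127.0.0.1", 2202),
}

async def pump(reader, writer):
    # Copy bytes one direction until EOF, then close the far side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    except ConnectionError:
        pass
    finally:
        writer.close()

async def handle(listen_port, client_r, client_w):
    # Route purely on the port the client connected to; the SSH
    # handshake and encryption pass through untouched.
    backend_host, backend_port = PORT_MAP[listen_port]
    backend_r, backend_w = await asyncio.open_connection(backend_host, backend_port)
    await asyncio.gather(pump(client_r, backend_w), pump(backend_r, client_w))

async def start_proxy():
    # One listener per mapped port; keep the returned servers referenced.
    return [
        await asyncio.start_server(
            lambda r, w, p=p: handle(p, r, w), "127.0.0.1", p)
        for p in PORT_MAP
    ]
```

Whether that beats a name-based proxy is exactly the usability trade-off being debated here.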
Jump servers - it's a thing and a good security measure.
And it's easy to create a clean 3 lines of ssh client config for the user to later just do
`ssh name`
Even less things to remember + you have documented your hostnames in the process.
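Those three lines might look like this (hostnames and the internal address are illustrative):

```
# ~/.ssh/config
Host name
    HostName 10.0.5.17
    ProxyJump jump.example.com
```

After which `ssh name` hops through the jump server transparently.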
The solution is ipv6.
I mean it works... but it's really ghetto. You have to handle username collisions (or enforce unique usernames). IPv4 should be non-free, and that'd cover the costs...
True, BUT you can use ProxyCommand in your ssh config, along with wildcard matches, to make this sort of thing very practical, at the cost of a single config change.
It's hard to think of a clearer example for the concept of Developer Experience.
One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.
The solution to this is TLS SNI redirecting.
You can front a TLS server on port 443 and then redirect without decrypting the connection based on the SNI name to your final destination host.
I'm not saying it's the solution I would implement, but caddy's L4 module does let you do this, essentially using TLS as a tunnel and openssl in the proxy command to terminate it client side.
But... this doesn't work for SSH, which is the problem here?
SSH has ProxyCommand which accepts the %h template.
Provided your users are willing to configure a little - or you provide a wrapping command - you can set up the tunneling for them.
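Putting the two ideas together, a sketch of SSH tunneled inside TLS so an SNI-aware proxy can route it without decrypting (the proxy hostname is illustrative):

```
# ~/.ssh/config - SSH-over-TLS via an SNI-routing proxy.
Host *.exe.dev
    # openssl sends %h as the SNI name in the ClientHello; the proxy on
    # :443 inspects only that and relays the still-encrypted stream to
    # the matching backend, where TLS is terminated.
    ProxyCommand openssl s_client -quiet -connect proxy.example.com:443 -servername %h
```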
You don't need SSH. Installing an SSH server on such a VM is a holdover from how UNIX servers worked. It puts you in the mindset of treating your server as a pet and doing things for a single VM instead of having proper server management in place. I would reconsider whether offering SSH is an actual requirement here, or whether it could be better served by offering users a proper control panel to manage and monitor the VMs.
Treating your server as a pet may be perfectly fine. Not everything has to be fully automated cloud-cluster cattle.
Even as a pet I think a proper interface for managing the server would be better and more secure than ssh.
Often those proper interfaces are wrappers around what you would run via SSH and add their own security holes, so I would argue against “more secure than SSH”.
Plenty of (cattle or pet) tooling essentially devolves to SSH under those layers of abstraction.
Could you suggest an alternative then? Something that is feature complete with SSH server, and also free.
I have not worked in server management in many years, but with how cheap code is with AI, rolling your own dashboard may not be such a bad idea.
>with SSH server
My comment was about how you do not need an ssh server. The idea of a server exposing a command line that allows potentially anything to be done is not necessary in order to manage and monitor a server.
what control panel is perfect for literally every type of project and has no edge cases