Docker has a known security issue with port exposure in that it punches holes through the firewall without asking your permission, see https://github.com/moby/moby/issues/4737
I usually expose ports like `127.0.0.1:1234:1234` instead of `1234:1234`. As far as I understand, it still punches holes this way but to access the container, an attacker would need to get a packet routed to the host with a spoofed IP SRC set to `127.0.0.1`. All other solutions that are better seem to be much more involved.
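For anyone unfamiliar with the syntax, this is roughly what I mean (the port numbers are just examples):

```
# Publish only on the loopback interface instead of 0.0.0.0:
docker run -d -p 127.0.0.1:1234:1234 some-image

# Or, in a compose file:
#   ports:
#     - "127.0.0.1:1234:1234"
```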
Containers are widely used at our company, by developers who don't understand underlying concepts, and they often expose services on all interfaces, or to all hosts.
You can explain this to them and they don't care; you can even demonstrate how you can access their data without permission, and they still don't get it.
Their app "works" and that's the end of it.
Ironically enough even cybersecurity doesn't catch them for it, they are too busy harassing other teams about out of date versions of services that are either not vulnerable, or already patched but their scanning tools don't understand that.
Checklist security at its finest.
My team where I work is responsible for sending frivolous newsletters via email and SMS to over a million employees. We use an OTP for employees to verify they gave us the right email/phone number to send them to. Security sees "email/sms" and "OTP" and therefore files a ticket against us at the highest "must respond in 15 minutes" priority every time an employee complains about having lost access to an email or phone number.
Doesn't matter that we're not sending anything sensitive. Doesn't matter that we're a team of 4 managing more than a million data points. Every time we push back security either completely ignores us and escalates to higher management, or they send us a policy document about security practices for communication channels that can be used to send OTP codes.
Security wields their checklist like a cudgel.
Meanwhile, through our bug bounty program, someone found that a dev had opened a globally accessible instance of the dev employee portal with sensitive information and reported it. Security wasn't auditing for those, since it's not on their checklist.
This is pretty common, developers are focused on making things that work.
Sysadmins were always the ones who focused on making things secure, and for a bunch of reasons they basically don’t exist anymore.
EDIT: what guidelines did I break?
> Ironically enough even cybersecurity doesn't catch them for it, they are too busy harassing other teams about out of date versions of services that are either not vulnerable, or already patched but their scanning tools don't understand that.
Wow, this really hits home. I spend an inordinate amount of time dealing with false positives from cybersecurity.
There are certainly people that don't care about security out there, but the biggest issue is just how much people are expected to know.
Docker, AWS, Kubernetes, some wrapper they've put around Kubernetes, a bunch of monitoring tools, etc.
And none of it will be their main job, so they'll just try to get something working by copying a working example, or reading a tutorial.
Turns out devsecops was just a layoff scheme for sysadmins
"Everybody gangsta 'bout infosec until their machine is cryptolockered." (some CISO, probably).
I wonder how many people realize you can use the whole 127.0.0.0/8 address space, not just 127.0.0.1. I usually use a random address in that space for all of a specific project's services that need to be exposed, like 127.1.2.3:3000 for web and 127.1.2.3:5432 for postgres.
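Concretely, something like this (the address, ports and images are arbitrary examples):

```
# One loopback address per project; the whole 127.0.0.0/8 block is already
# routed to lo on Linux, so no extra interface setup is needed.
docker run -d -p 127.1.2.3:3000:3000 my-web-image
docker run -d -p 127.1.2.3:5432:5432 postgres:16-alpine
```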
Be aware that there is an effort to repurpose most of 127.0.0.0/8: https://www.ietf.org/archive/id/draft-schoen-intarea-unicast...
It’s well-intentioned, but I honestly believe that it would lead to a plethora of security problems. Maybe I am missing something, but it strikes me as on the level of irresponsibility of handing out guardless chainsaws to kindergartners.
Also a great way around code that tries to block you from hitting resources local to the box. Lots of code out there in the world blocking the specific address "127.0.0.1" and maybe if you were lucky "localhost" but will happily connect to 127.6.243.88 since it isn't either of those things. Or the various IPv6 localhosts.
Relatedly, a lot of systems in the world either don't block local network addresses, or block an incomplete list, with 172.16.0.0/12 being particularly poorly known.
Also, many people don’t remember that the zeros in between numbers in IPs can be dropped, so pinging 127.1 works fine. This is also the reason why my home network is a 10.0.0.0/24: I don’t need the bigger address space, but reaching devices at 10.1 sure is convenient!
TIL I always thought it was /32
We’ve found this out a few times when someone inexperienced with Docker would expose a Redis port and run docker compose up on a publicly accessible machine. It would only be minutes until that Redis instance was infected. Also blame Redis for having the ability to run arbitrary code without auth by default.
-p 127.0.0.1: might not offer all of the protection you would expect, and is arguably a bug in Docker's firewall rules that they're failing to address. They instead choose to say "hey, we don't protect against L2", and have an open issue here: https://github.com/moby/moby/issues/45610.
This secondary issue with Docker is a bit more subtle: they don't respect the bind address when they do forwarding into the container. The end result is that machines one hop away can forward packets into the Docker container.
For a home user the impact could be that the ISP can reach into the container. Depending on risk appetite this can be a concern (Salt Typhoon going after ISPs).
More commonly it might end up exposing more isolated work-related systems to related networks one hop away.
What about cloud VMs? I would love to read more about "they don't respect the bind address when they do forwarding into the container" and "machines one hop away can forward packets into the docker container" if you could be so kind!
Update: thanks for the link, it looks quite bad. I am now thinking that an adjacent VM in a provider like Hetzner or Contabo could be able to pull it off. I guess I will have to finally switch the remaining Docker installations to Podman and/or resort to https://firewalld.org/2024/11/strict-forward-ports
And this was one of the reasons why I switched to Podman. I haven't looked back since.
I want to use Podman but I keep reading that the team considers podman-compose a crappy workaround they don’t really want to keep.
This is daunting because:
Take 50 random popular open source self-hostable solutions and the instructions are invariably: normal bare installation or docker compose.
So what’s the ideal setup when using Podman? Use compose anyway and hope it won’t be deprecated, or use systemd as Podman suggests as the replacement for Compose?
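For reference, the systemd route Podman pushes nowadays is Quadlet (Podman 4.4+): one small unit file per container instead of a compose file. A minimal sketch, with the image and port as placeholders:

```
# ~/.config/containers/systemd/myapp.container (rootless user unit)
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates `myapp.service`, which you start and enable like any other unit.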
Tbh I prefer not exposing any ports directly, and then throwing Tailscale on the network used by docker. This automatically protects everything behind a private network too.
FOSS alternative is to throw up a $5 VPS on some trusted host, then use Wireguard (FOSS FTW) to do basically exactly the same, but cheaper, without giving away control and with better privacy.
There is a bunch of software that makes this easier than trivial too; one example: https://github.com/g1ibby/auto-vpn/
I agree, Tailscale FTW! You don't even need to integrate it with Docker. Just add a subnet route and everything just works. It's a great product.
Important to note that, even if you use Tailscale, the firewall punching happens regardless, so you still have to make sure you either:
1. Have some external firewall outside of the Docker host blocking the port
2. Explicitly tell Docker to bind to the Tailscale IP only
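A sketch of option 2, assuming the host's Tailscale address is 100.64.0.5 (`tailscale ip -4` prints the real one):

```
# Reachable over the tailnet only, not on the public interface:
docker run -d -p 100.64.0.5:8080:8080 some-image
```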
I would love to read a write-up on that! Are you doing something like https://tailscale.com/blog/docker-tailscale-guide ?
Another option is using Cloudflare Tunnels (`cloudflared`), and stacking Cloudflare Access on top (for non-public services) to enforce authentication.
I avoid most docker problems by running unprivileged containers via rootless podman, on a rocky-linux based host with selinux enabled.
At this point docker should be considered legacy technology, podman is the way to go.
Would that actually save you in this case? OP had their container exposed to the internet, listening for incoming remote connections. It wouldn't matter in that case if you're running an unprivileged container, Podman, Rocky Linux or SELinux, since everything is just wide open at that point.
The ports thing is what convinced me to transition to Podman. I don't need a container tool doing ports on my behalf.
Why am I running containers as a user that needs to access the Docker socket anyway?
Also, shoutout to the teams that suggest an easy setup running their software in a container by mounting the Docker socket into its filesystem.
I am not a security person at all. Are you really saying that it could potentially cause iptables to open ports without an admin knowing? Is that shockingly, mind-bogglingly bad design on Docker's part, or is it just me?
Worse, the linked bug report is from a DECADE ago, and the comments underneath don't seem to show any sense of urgency or concern about how bad this is.
Have I missed something? This seems appalling.
Your understanding is correct, unfortunately. Not only that, the developers are also reluctant to make 127.0.0.1:####:#### the default in their READMEs and docker-compose.yml files because UsEr cOnVeNiEnCe, e.g. https://github.com/louislam/uptime-kuma/pull/3002 closed WONTFIX
Correct. This can be disabled [0], but then you need to work around it. Usually you can "just" use host networking and manage iptables rules manually. Not pretty, but in that case you at least know what's being done to the system and iptables.
[0] https://docs.docker.com/engine/network/packet-filtering-fire...
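For completeness, the switch lives in /etc/docker/daemon.json; note that with it off, published ports stop working until you add your own forwarding/NAT rules, which is the "work around this" part:

```
{
  "iptables": false
}
```

Restart the daemon afterwards (e.g. `systemctl restart docker`).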
To run Docker, you need to be an admin or in the docker group, which the docs warn is equivalent to having sudo rights, i.e. being an admin.
As for it not being explicitly permitted, no ports are exposed by default. You must provide the docker run command with -p, for each port you want exposed. From their perspective, they're just doing exactly what you told them to do.
Personally, I think it should default to giving you an error unless you specified which IPs to listen on, but this is far from as big an issue as people make it out to be.
The biggest issue is that it is a ginormous foot gun for people who don't know Docker.
> without an admin knowing
For people unfamiliar with Linux firewalls or the software they're running: maybe. First of all, Docker requires admin permissions, so whoever is running these commands already has admin privileges.
Docker manages its own iptables chain. If you rely on something like UFW that works by using default chains, or its own custom chains, you can get unexpected behaviour.
However, there's nothing secret happening here. Just listing the current firewall rules should display everything Docker permits and more.
Furthermore, the ports opened are the ones declared in the command line (-p 1234) or in something like docker-compose declarations. As explained in the documentation, not specifying an IP address will open the port on all interfaces. You can disable this behaviour if you want to manage it yourself, but then you would need some kind of scripting integration to deal with the variable behaviour Docker sometimes has.
From Docker's point of view, I sort of agree that this is expected behaviour. People finding out afterwards often misunderstand how their firewall works, and haven't read or fully understood the documentation. For beginners, who may not be familiar with networking, Docker "just works" and the firewall in their router protects them from most ills (hackers present in company infra excluded, of course).
Imagine having to adjust your documentation to go from "to try out our application, run `docker run -p 8080 -p 1234 some-app`" to "to try out our application, run `docker run -p 8080 -p 1234 some-app`, then run `nft add rule ip filter INPUT tcp dport 1234 accept;nft add rule ip filter INPUT tcp dport 8080 accept;` if you use nftables, or `iptables -A INPUT -p tcp --dport 1234 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT; iptables -A INPUT -p tcp --dport 8080 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT` if you use iptables, or `sudo firewall-cmd --add-port=1234/tcp;sudo firewall-cmd --add-port=8080/tcp; sudo firewall-cmd --runtime-to-permanent` if you use firewalld, or `sudo ufw allow 1234; sudo ufw allow 8080` if you use UFW, or if you're on Docker for Windows, follow these screenshots to add a rule to the firewall settings and then run the above command inside of the Docker VM". Also don't forget to remove these rules after you've evaluated our software, by running the following commands: [...]
Docker would just not gain any traction as a cross-platform deployment model, because managing it would be such a massive pain.
The fix is quite easy: just bind to localhost (specify -p 127.0.0.1:1234:1234 instead of -p 1234:1234) if you want to run stuff on your local machine, or an internal IP that's not routed to the internet if you're running this stuff over a network. Unfortunately, a lot of developers publishing their Docker containers don't tell you to do that, but in my opinion that's more of a software product problem than a Docker problem. In many cases, I do want applications to be reachable on all interfaces, and having to specify each and every one of them (especially scripting that with the occasional address changes) would be a massive pain.
For this article, I do wonder how this could've happened. For a home server to be exposed like that, the server would need to be hooked to the internet without any additional firewalls whatsoever, which I'd think isn't exactly typical.
> Is that shockingly, mind-bogglingly bad design on Docker's part, or is it just me?
This is the default for most aspects of Docker. Reading the source code & git history is a revelation of how badly things can be done, as long as you burn VC money for marketing. Do yourself a favor and avoid all things by that company / those people; they've never cared about quality.
Not a security issue. Docker explains it very clearly in their documentation. https://docs.docker.com/engine/network/packet-filtering-fire...
If you have opened up a port in your network to the public, the correct assumption is to direct outside connections to your application as per your explicit request.
Interesting trick, it was definitely an "Oh shit" moment when I learned the hard way that ports were being exposed. I think setting an internal Docker network is another simple fix for this, though it complicates talking to other machines behind the same firewall.
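Something like this compose sketch (names are made up): the database sits on an `internal` network, so nothing is published, and, as mentioned, it also can't reach anything outside that network:

```
services:
  app:
    image: my-app
    ports:
      - "127.0.0.1:8080:8080"
    networks: [backend]
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: change-me
    networks: [backend]

networks:
  backend:
    internal: true   # no routing into or out of this network
```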
I use rootless docker and then also do this for my traefik container. Then, a bit of nftables config to expose to the outside world.
From the linked issue
> by running docker images that map the ports to my host machine
If you start a docker container and map port 8080 of the container to port 8080 on the host machine, why would you expect port 8080 on the host machine to not be exposed?
I don't think you understand what mapping and opening a port does if you think that when you tell docker to expose a port on the host machine that it's a bug or security issue when docker then exposes a port on the host machine...
Docker supports many network types: VLANs, host-attached, bridged, private, etc. There are many options available to run your containers on if you don't want to expose ports on the host machine. A good place to start: if you don't want ports exposed on the host machine, then you probably should not start your Docker container up with host networking and a port exposed on that network...
Regardless of that, your container host machines should be behind a load balancer w/ firewall and/or a dedicated firewall, so containers poking holes (because you told them to and then got mad at it) shouldn't be an issue
I think the unintuitive thing is that by "port mapping", Docker is doing DNAT, which doesn't trigger the input firewall rules. Unless you're relatively well versed in the behavior of iptables or nftables, you probably expect the "port mapping" to work like a regular old application proxy (which would obey firewall rules blocking all input) and not use NAT and firewall rules (and all of the attendant complexity that brings).
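You can see the difference for yourself: the published ports show up as DNAT rules in the nat table rather than as ACCEPT rules in the INPUT chain that most firewall frontends manage:

```
# Where Docker's port publishing actually lives:
sudo iptables -t nat -nL DOCKER

# The chain Docker reserves for your own restrictions on forwarded traffic:
sudo iptables -nL DOCKER-USER
```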
It only exposes ports if you pass the command-line flag that says to do so. How is that "without asking your permission"?
It should have the proxy set up but leave opening the port to the user.
No other server software that I know of touches the firewall to make its own services accessible. Though I am aware that the word being used is "expose". I personally only have private IPs on my Docker hosts when I can and access them with WireGuard.
Securing it is straightforward; too bad it's not the default: https://docs.docker.com/engine/network/packet-filtering-fire...
Do I understand the bottom two sections correctly? If I am using ufw as a frontend, I need to switch to firewalld instead and modify the 'docker-forwarding' policy to only forward to the 'docker' zone from loopback interfaces? Would be good if the page described how to do it, esp. for users who are migrating from ufw.
More confusingly, firewalld has a different feature to address the core problem [1] but the page you linked does not mention 'StrictForwardPorts' and the page I linked does not mention the 'docker-forwarding' policy.
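If I read the firewalld post correctly, StrictForwardPorts is just a toggle in the main config file (firewalld 2.3 or newer), separate from the zone/policy setup on Docker's page; roughly:

```
# In /etc/firewalld/firewalld.conf:
#   StrictForwardPorts=yes
# then:
sudo systemctl restart firewalld
```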
This is only an issue if you run Docker on your firewall, which you absolutely should not.
> "None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet."
This prompts a reflection that, as an industry, we should do a better job of providing solid foundations.
When I check tutorials on how to drill in the wall, there is (almost) no warning about how I could lose a finger doing so. It is expected that I know I should be careful around power tools.
How do we make some information part of common sense? "Minimize the surface of exposure on the Internet" should be drilled into everyone, but we are clearly not there yet.
I don't think it's that unreasonable for a database guide not to mention it. This is more of a general server/Docker security thing. Just as I wouldn't expect an application guide to tell me not to use Windows XP because it's insecure.
Most general guides regarding Docker, on the other hand, mention not to expose containers directly to the internet, and if a container has to be exposed, to do so behind a reverse proxy.
> as an industry, we should do a better job of providing solid foundations.
Here is the fundamental confusion: programming is not an industry, it is a (ubiquitous) type of tooling used by industries.
Software itself is insecure in its tooling and in its deployment. So we now have a security industry struggling to improve software.
Some software companies are trying to improve but software in the $cloud is just as big a mess as software on work devices and personal devices.
> When I check tutorials on how to drill in the wall, there is (almost) no warning about how I could lose a finger doing so. It is expected that I know I should be careful around power tools.
I think the analogy and the example work better when the warning is that you should be careful when drilling in walls because there may be an electrical wire that will be damaged.
Probably the "we can do everything and anything right now easy peasy, for serious of just just for the heck of it" attitude needs to be dialed down. The industry promises the heavens and devilish charm while releasing not even half cooked unnecessary garbage sometimes, that has bells and whistles to distract from the poor quality and not thought through, rushed illusions, that can chop all your imaginary limbs off in a sidestep or even without complete uninterrupted attention.
Things potentially making big trouble like circular saw tables have prety elaborate protection mechanisms built in. Rails on high places, seatbelt, safety locks come to mind as well of countless unmentioned ones protecting those paying attention and those does not alike. Of course, decades of serious accidents promted these measures and mostly it is regulated now not being a courtesy of the manufacturer, other industries matured to this level. Probably IT industry needs some growing up still and less children playing adults - some kicking in the ass for making so rubishly dangerous solutions. Less magic, more down to earth reliability.
Between MongoDB running without a password by default and quick start guides brushing over anything security related, the industry can use a more security-conscious mindset.
However, security is hard and people will drop interest in your project if it doesn't work automatically within five minutes.
The hard part is at what experience level the warnings can stop. Surely developer documentation doesn't need the "docker exposes ports by default" lesson repeated every single time, but there are a _lot_ of "beginner" tutorials on how to set up software through containers that ignore any security stuff.
For instance, when I Google "how to set up postgres on docker", this article is returned, clearly aimed at beginners: https://medium.com/@jewelski/quickly-set-up-a-local-postgres... This will set up an easily guessable password on both postgres and pgadmin, open to the wider network without warning. Not so bad when run on a VM or Linux computer, quite terrible when used for a small project on a public cloud host.
The problems caused by these missing warnings are almost always the result of lacking knowledge about how Docker configures its networks, or how (Linux) firewalls in general work. However, most developers I've met don't know or care about these details. Networking is complicated beyond the bare basics, and security gets in the way.
With absolutely minimal impact on usability, all those guides that open ports to the entire internet can just prepend 127.0.0.1 to their port definitions. Everyone who knows what they're doing will remove them when necessary, and the beginners need to read and figure out how to open ports if they do want them exposed to the internet.
That's an interesting takeaway. I just quoted the exact same line from the blog to a friend, with my response being
> why didn't somebody stop me?!
I'm not sure if "the industry" has a problem with relaying the reality that: the internet is full of malicious people that will try to hack you.
My takeaway was closer to: the author knew better but thought some mix of 1) no one would provide incomplete information, 2) I'm not a target, 3) containers are magic and therefore safe. I say that because they admit as much immediately after.
> Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.
Just like people shouldn't just buy industrial welding machines, SCUBA equipment or a parachute and "wing it" I think the same can be said here.
As a society we already have the structures set up: the author would have been more than welcome to attend a course or a study programme in server administration that would prepare them to run their own server.
I myself wouldn't even venture into exposing a server to the internet and maintaining it in my free time, and that is with a postgraduate degree in an engineering field and more than 20 years of experience.
It is widely known not to expose anything to the public internet unless it's hardened and/or sandboxed. A random service you use for playing around definitely does not meet this description, and most people do know that, just like what a power tool can do to your fingers.
Tailscale is a great solution for this problem. I too run a home server with Nextcloud and other stuff, but protected behind a Tailscale (WireGuard) VPN. I can't even imagine exposing something like my family's personal data over the internet, no matter how convenient it is.
But I sympathize with OP. He is not a developer, and it is sad that whatever software engineers produce is vulnerable to script kiddies. Exposing a database or any server with a good password should not be exploitable in any way. C and C++ have been failing us for decades, yet we continue to use such unsafe stacks.
> C and C++ have been failing us for decades, yet we continue to use such unsafe stacks.
I'm not sure — what do C and C++ have to do with this?
C and C++ are not accountable for all the evil in the world. Yes I know, some Rust evangelists want to tell us that, but most servers get owned through configuration mistakes.
Thanks, I've got a homelab/server with a few layers of protection right now, but had been wanting to just move to a vpn based approach - this looks really nice and turnkey, though I dislike the requirement of using a 3P IDP. Still, promising. Cheers.
If you make a product that is so locked down by default that folks need to jump through 10 hoops before anything works then your support forums will be full of people whining that it doesn't work and everybody goes to the competition that is more plug and play.
Ever wonder why Windows still dominates Linux on the average PC desktop? This is why.
I just switched to Tailscale for my home server just before the holidays and it has been absolutely amazing. As someone who knows very little about networking, it was pretty painless to set up. Can’t really speak to the security of the whole system, but I tried my best to follow best practices according to their docs.
There are a lot of vulnerability categories. Memory unsafety is the origin of some of them, but not all.
You could write a similar rant about any development stack and all your rants would be 100% unrelated with your point: never expose a home-hosted service to the internet unless you seriously know your shit.
What do you need Tailscale for? Why isn't Wireguard enough?
C and C++ are not failing us and are not unsafe.
For all intents and purposes, the only ports you should ever forward are ones that are explicitly designed for being public facing, like TLS, HTTP, and SSH. All other ports should be closed. If you’re ever reaching for DMZ, port forwarding, etc., think long and hard about what you’re doing. This is a perfect problem for Tailscale or WireGuard. You want a remote database? Tailscale.
I even get a weird feeling these days with SSH listening on a public interface. A database server, even with a good password/ACLs, just isn’t a great safe idea unless you can truly keep on top of all security patches.
Good time to make sure UPnP is not enabled. It's an authenticationless protocol. Yeah, you read that right, no auth needed.
I think I'm missing something here - what is specific about Docker in the exploit? Nowhere is it mentioned what the actual exploit was, and whether for example a non-containerized postgres would have avoided it.
Should the recommendation rather be "don't expose anything from your home network publically unless it's properly secured"?
From TFA:
> This was somewhat releiving, as the latest change I made was spinning up a postgres_alpine container in Docker right before the holidays. Spinning it up was done in a hurry, as I wanted to have it available remotely for a personal project while I was away from home. This also meant that it was exposed to the internet, with open ports in the router firewall and everything. Considering the process had been running for 8 days, this means that the infection occured just a day after creating the database. None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet. Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.
Seems like they opened up a postgres container to the Internet (IIRC docker does this whether you want to or not, it punches holes in iptables without asking you). Possibly misconfigured authentication or left a default postgres password?
This is one that can sneak up on you even when you're not intentionally exposing a port to the internet. Docker manages iptables directly by default (you can disable it, but the networking between compose services will be messed up). Another common case where this can bite you is when using an iptables front-end like ufw and thinking you're exposing just the application. Unless you bind to localhost, Postgres in this case will be exposed. My recommendation is to review iptables -L directly and, where possible, use firewalls closer to the perimeter (e.g. the one from your VPS provider) instead of solely relying on iptables on the same node.
I really like the "VPN into home first" philosophy of remote access to my home IT. I was doing openvpn into my ddwrt router fortunately years, and now it's wireguard into openwrt. It's quite easy for me to vpn in first and then do whatever: check security cams, control house via home assistant, print stuff, access my zfs shared drive, run big scientific simulations or whatever on big computer, etc. The router VPN endpoint is open to attack but I think it's a relatively small attack surface.
> I think it's a relatively small attack surface.
Plus, you can obfuscate that too by using a random port for Wireguard (instead of the default 51820): if Wireguard isn't able to authenticate (or pre-authenticate?) a client, it'll act as if the port is closed. So, a malicious actor/bot wouldn't even know you have a port open that it can exploit.
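Changing the port is a one-line tweak in the server config (the number below is an arbitrary example); clients just need a matching `Endpoint`:

```
# Server-side wg0.conf
[Interface]
Address = 10.8.0.1/24
ListenPort = 48213          # anything other than the default 51820
PrivateKey = <server private key>
```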
I use WireGuard to access all in-home stuff as well, but there is one missing feature and one bug with the official WireGuard app for Android that is inconvenient:
- Missing feature: do not connect when on certain SSIDs.
- Bug: when the WG connection is active and I put my phone in flight mode (which I do every night), it drains the battery from full to almost empty during the night.
I've taken this approach as well. The WireGuard clients can be configured to make this basically transparent based on what SSID I'm connected to. I used to do similar with IPSec/IKEv2, but WireGuard is so much easier to manage.
The only thing missing on the client is Split DNS. With my IPSec/IKEv2 setup, I used a configuration profile created with Apple Configurator plus some manual modifications to make DNS requests for my internal stuff go through the tunnel and DNS requests for everything else go to the normal DNS server.
My compromise for WireGuard is that all DNS goes to my home network, but only packets destined for my home subnets go through the tunnel.
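On the client side that compromise looks roughly like this (subnets, keys and hostname are placeholders):

```
[Interface]
PrivateKey = <client private key>
Address = 10.8.0.2/32
DNS = 192.168.1.1                 # home DNS server; all lookups use the tunnel

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24   # only home subnets routed via the tunnel
PersistentKeepalive = 25
```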
> Fortunately, despite the scary log entries showing attempts to change privileges and delete critical folders, it seemed that all the malicious activity was contained within the container.
OP can't prove that. The only way is to scrap the server completely and start with a fresh OS image. If OP has no backup and ansible repo (or anything similar) to configure a new home server quickly, then I guess another valuable lesson was learned here.
Not 100% sure what you mean by "scrapping" the server; are you suggesting just reinstalling the OS? I'd default to assuming the hardware itself is compromised somehow, if I'm assuming someone had root access. If you were doing automated backups from something you assume was compromised, I'm not sure restoring from backups is a great idea either.
A lot of people comment about Docker firewall issues. But it still doesn't answer how an exposed postgres instance leads to arbitrary code execution.
My guess is that the attacker figured out or used the default password for the superuser. A quick lookup reveals that a pg superuser can create extensions and run system commands (e.g. via COPY ... TO PROGRAM).
I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.
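In the meantime, a slightly safer quick start is only a couple of flags away; a sketch (tag and flags are just examples):

```
# Strong generated password, published on loopback only:
PGPASS="$(openssl rand -base64 24)"   # note it down somewhere safe
docker run -d --name pg \
  -e POSTGRES_PASSWORD="$PGPASS" \
  -p 127.0.0.1:5432:5432 \
  postgres:16-alpine
```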
> I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.
Takeaway for beginner application hosters (aka "webmasters") is to never expose something on the open internet unless you're 100% sure you absolutely have to. Everything should default to using a private network, and if you need to accept external connections, do so via some bastion host that isn't actually hosted on your network, which reaches into your private network via proper connections.
Ok - curious if anyone can provide some feedback for me on this.
I am running Immich on my home server and want to be able to access it remotely.
I’ve seen the options of using wireguard or using a reverse proxy (nginx) with Cloudflare CDN, on top of properly configured router firewalls, while also blocking most other countries. Lots of this understanding comes from a YouTube guide I watched [0].
From what I understand, people say reverse proxy/Cloudflare is faster for my use case, and if everything is configured correctly (which it seems like OP totally missed the mark on here), the threat of breaches into my server should be minimal.
Am I misunderstanding the “minimal” nature of the risk when exposing the server via a reverse proxy/CDN? Should I just host a VPN instead even if it’s slower?
Obviously I don’t know much about this topic. So any help or pointing to resources would be greatly appreciated.
If you care about privacy I wouldn't even consider using Cloudflare or any other CDN, because they get to see your personal data in plain "text". Can you forward a port from the internet to your home network, or are you stuck in some CG-NAT hell?
If you can, then you can just forward the port to your Immich instance, or put it behind a reverse proxy that performs some sort of authentication (password, certificate) before forwarding traffic to Immich. Alternatively you could host your own Wireguard VPN and just expose that to the internet - this would be my preferred option out of all of these.
If you can't forward ports, then the easiest solution will probably be a VPN like Tailscale that will try to punch holes in NAT (to establish a fast direct connection, might not work) or fall back to communicating via a relay server (slow). Alternatively you could set up your own proxy server/VPN on some cheap VPS but that can quickly get more complex than you want it to be.
You don't need any of this, and the article is completely bogus: having a port forwarded to a database in a container is not a security vulnerability unless the database itself has a vulnerability. The article fails to explain how they actually got remote code execution, blames it on some Docker container vulnerability, and links to a random article as a source that has nothing to do with what is being claimed.
What you have to understand is that having an Immich instance on the internet is only a security vulnerability if Immich itself has a vulnerability in it. Obviously, this is a big if, so if you want to protect against this scenario, you need to make sure only you can access this instance, and you have a few options here that don't involve 3rd parties like Cloudflare. You can make it listen only on the local network and then use SSH port tunneling, or you can set up a VPN.
Cloudflare has been spamming the internet with "burglars are burgling in the neighbourhood, do you have burglar alarms" articles; YouTube is also full of this.
Reverse proxy is pretty good - you've isolated the machine from direct access so that is something.
I'm in the same boat. I've got a few services exposed from a home server via NGINX with a Let's Encrypt cert. That removes direct network access to your machine.
Ways I would improve my security:
- Adding a WAF (ModSecurity) to NGINX - big time investment!
- Switching from public facing access to Tailscale only (Overlay network, not VPN, so ostensibly faster). Lots of guys on here do this - AFAIK, this is pretty secure.
Reverse proxy vs. Overlay network - the proxy itself can have exploitable vulnerabilities. You should invest some time in seeing how nmap can identify NGINX services, and see if those methods can be locked down. Good debate on it here:
https://security.stackexchange.com/questions/252480/blocking...
Piggybacking on your request, I would also like feedback. I also run some services on my home computer. The setup I'm currently using is a VPN (Wireguard) redirecting a UDP port from my router to my PC. Although I am a Software Engineer, I don't know much about networks/infra, so I chose what seemed to me the most conservative approach.
Well, you are better off using Google Photos for securely accessing your photos over the Internet. It is not a matter of securing it once, but one of keeping it secure all the time.
The exploit is not known in this case. The claim that it was password protected seems like an unverified statement. No pg_hba.conf content was provided; this Docker image must have a very open default config for Postgres.
`Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.`
Despite people slating the author, I think this is a reasonable oversight. On the surface, spinning up a Postgres instance in Docker seems secure because it's contained. I know many articles claim "Docker = secure".
Whilst it's easy to point to the common sense needed, perhaps we need better defaults. In this case, the Postgres images should only permit the CLI, and nothing else.
Every guide out there says to link Postgres to the application (the one using Postgres), so the Postgres network is not reachable. Then, even if it were exposed, a firewall would need to be configured to allow access. Then, another thing every guide does is suggest a reverse proxy, decreasing the attack surface. Then, such a reverse proxy would need some kind of authentication. Instead, I simply run it behind WireGuard. There's still plenty that can go wrong, such as a backdoor in the Postgres database image (you used docker pull), not upgrading it while it contains serious vulnerabilities, or a backdoor in some other image.
> spinning up a Postgres instance in Docker seems secure because it’s contained
This doesn't make any sense. Running something in a container doesn't magically make it "secure." Where does this misconception come from?
With the advent of WireGuard, I no longer see a reason to expose anything other than that to the outside world. Just VPN back in and work like usual.
I’d recommend using something like Tailscale for these use cases and general access, there’s no need to expose services to the internet much of the time.
Tailscale also has funnel which would be more secure for public exposure.
The usual route people take is to either use a VPN/Tailscale/Cloudflare Tunnels etc. and only expose things locally, so you need to be on the VPN network to access services, or to not expose any ports and rely on a reverse proxy. Actually, you can combine the two approaches, and it is relatively easy for non-SWE homelab hobbyists.
If anyone is looking for one, https://nginxproxymanager.com/
Been using it for years and it’s been solid.
I use HAProxy on PFSense to expose a home media server (among other services) for friends to access. It runs on a privileged LXC (because NFS) but as an unprivileged user.
Is this reckless? Reading through all this makes me wonder if SSHFS (instead of NFS) with limited scope might be necessary.
I'm curious what the password for postgres looked like.
This exact thing happened to a friend, but there was no need to exploit obscure memory safety vulnerabilities, the password was “password”.
My friend learnt that day that you can start processes on the machine from Postgres.
My self-hosting setup uses Cloudflare tunnels to host the website without opening any ports. And Tailscale VPN to directly access the machine. You may want to look at it!
I had ~500 kbps sustained attacks against my residential ISP for years.
I learned how to do everything through SSH - it is really an incredible Swiss Army knife of a tool.
I remember working at a non-tech company and their firewall blocked anything not on port 22, 80 and 443. I got real good at port forwarding with ssh.
I was exposing my services the same way for a long time; now I only expose web services via Cloudflare, with an iptables configuration to reject everything on port 443 not coming from them.
I also use knockd for port knocking to allow the ssh port, just in case I need to log in to my server without having access to one of my devices with Wireguard, but I may drop this since it doesn't seem very useful.
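For anyone curious, a knockd setup is a short config along these lines (the sequence, ports and timings are arbitrary examples):

```
# /etc/knockd.conf
[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```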
Docker is a terrible self-hosting tool reason #4056. Let's give everyone a footgun and recommend they use it at work and their house.
Especially with a tool that you don't have an enterprise-class firewall in front of, security needs to be automatic, not an afterthought.
Every server gets constantly probed for SSH. Since half the servers on the Internet haven't been taken over yet, it doesn't seem like SSH has significant exploits (well, there was that one signal handler race condition).
Unless you're trying to do one of those designs that cloud vendors push to fully protect every single traffic flow, most people have some kind of very secure entry point into their private network and that's sufficient to stop any random internet attacks (doesn't stop trojans, phishing, etc). You have something like OpenSSH or Wireguard and then it doesn't matter how insecure the stuff behind that is, because the attacker can't get past it.
It's also common practice to do what everyone here recommends and put things behind a firewall.
The separation of control and function has been a security practice for a long time.
Ports 80 and 443 can be open to the internet, and in 2024 whatever port WireGuard uses. All other ports should only be accessible from the local network.
With VPS providers this isn't always easy to do. My preferred VPS provider, however, provides a separate firewall, which makes that easier.
OpenSSH has no currently known flaws, but in the past it contained a couple. For example, the xz backdoor utilized OpenSSH, and it has contained a remote vulnerability in the past (around 2003). Furthermore, some people use password auth as well as weak (low entropy or reused, breached) passwords. Instead, only use public key authentication. And tarpit the mofos brute forcing SSH (e.g. with fail2ban). They always do it on IPv4, not IPv6. So another useful layer (aside from not using IPv4) is whitelisting the IPv4 addresses that require access to the SSH server. There is no reason for the entire world to need access to your home network's SSH server. Or, at the very least, don't use port 22. When in doubt: check your logs.
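The fail2ban side is only a few lines; a minimal sketch (thresholds are a matter of taste):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3
findtime = 600
bantime  = 3600
```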
It is more likely that the container image was already pwned, as other users had the same issue, see: https://unix.stackexchange.com/questions/757120/how-was-my-p....
Edit: It can also happen with the official image: https://www.postgresql.org/about/news/cve-2019-9193-not-a-se...
“None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet.”
Unless you want it publicly available and expect people to try and illegitimately access it, never expose it to the internet.
Almost the same thing happened to me 2 years ago[1]. I didn’t understand the docker/ufw footgun at that time.
How is it that your router is allowing the incoming connection? It doesn't make sense.
> None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet.
Um that's pretty ... naïve?
Closing even your VPN port goes a little far. Wireguard for example doesn't even reply on the port if no acceptable signature is presented. Assuming the VPN software is recent, that's as close as you can come to having everything closed.
I'd kinda like a VPN with my own storage but gave up; this is one of the main reasons. I just don't have time to keep an eye on all this stuff.
The article misses one critical point in these attacks:
practically all these attacks require downloading remote files to the server once they gain access, using curl, wget or bash.
Restricting arbitrary downloads from curl, wget or bash (or better, any binary) makes these attacks pretty much useless.
Also these cryptominers are usually dropped to /tmp, /var/tmp or /dev/shm. They need internet access to work, so again, restricting outbound connections per binary usually mitigates these issues.
https://www.aquasec.com/blog/kinsing-malware-exploits-novel-...
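A container-level variant of the same idea (not per-binary, but it blocks the usual drop locations): run the container read-only with non-executable temp mounts, assuming the application tolerates it:

```
docker run -d --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  --tmpfs /var/tmp:rw,noexec,nosuid,size=64m \
  some-image
```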
> Restricting arbitrary downloads from curl, wget or bash (or better, any binary) makes these attacks pretty much useless.
Any advice what that looks like for a docker container? My border firewall isn't going to know what binary made the request, and I'm not aware of per-process restrictions of that kind
Docker's poor design strikes again!
"None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet."
Really? Nor any Docker guides warning in general about exposing a container to the Internet?
a simple docker run --memory=512m --cpus=1 could have at least limited the damage here.
I don't think so; it would have made the malware harder to spot.
> None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet.
This is like IT security 101.
> A quick IP search revealed that it was registed with a Chinese ISP and located in the Russian Federation.
Welcome to the Internet. Follow me. Always block China and Russia from any of your services. You are never going to need them and the only thing that can come from them is either a remote attack or unauthorized data sent to them.
> Always block China and Russia from any of your services.
But does this add much security if bad actors are using VPNs?
Yes, but go further: Are you ever using your home server outside your own country? If the answer is no, block everything but your country. You can remove the vast majority of your attack surface by just understanding where you need to be exposed.
Yeah, the problem today is that there are many guides on MVPing software which don't teach you basic security.
The guy doesn't have a software background.
This is basically the problem with the 'everyone-should-code' approach - a little knowledge can be dangerous.
Blocking China and Russia adds no security. Attacks can come from anywhere. And when you start blocking IPs, you may accidentally block yourself.
Your first mistake is allowing Chinese/Russian traffic to your server...