• dewey an hour ago

    I'd also suggest people take a look at Dokku. It's a very mature project with a similar scope and was discussed here a few weeks ago:

    https://news.ycombinator.com/item?id=41358020

    I wrote up my own experiences too (https://blog.notmyhostna.me/posts/selfhosting-with-dokku-and...) and I can only recommend it. It is ~3 commands to set up an app, and one push to deploy after that.

    • FloatArtifact 13 minutes ago

      Part of me dies every time I see projects not integrating robust backup and restore systems.

      • mimischi 44 minutes ago

        Been using dokku for probably 8 years now? (Or something close to that; it used to be written entirely in bash!) Hosting private stuff on it, and an application at $oldplace probably also still runs on this solid setup. Highly recommended, and the devs are great sports!

        • rgrieselhuber an hour ago

          I've kept a list of these tools that I've been meaning to check out. In scope, do they cover securing the instance? Is there any automation for creating networks of instances?

          • dewey an hour ago

            > In scope, do they cover securing the instance?

            Most of these I checked don't, but a recent Ubuntu version is perfectly fine to use as-is.

            > Is there any automation for creating networks of instances?

            Not that I'm aware of; it would also somewhat defeat the purpose of these tools, which are supposed to be simple. (Dokku is "just" a shell script.)

        • pqdbr 6 hours ago

          This looks really nice, congrats!

          1) I see Kamal was an inspiration; care to explain what differs from it? I'm still rocking custom Ansible playbooks, but I was planning on checking out Kamal after version 2 is released soon (I think alongside Rails 8).

          2) I see databases are in your roadmap, and that's great.

          One feature that IMHO would be a game changer for tools like this (and is lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.

          Even for side projects, a periodic SQL dump stored in S3 is generally not enough nowadays, and any project that gains traction will need to implement some sort of streaming backup, like Litestream (for SQLite) or Barman with streaming backup (for Postgres).
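
          For the SQLite side, a Litestream setup really is tiny. A sketch (the bucket name and paths are placeholders, not from this thread):

            ```yaml
            # litestream.yml: continuously replicate a SQLite DB to S3
            dbs:
              - path: /var/lib/myapp/app.db
                replicas:
                  - type: s3
                    bucket: my-backup-bucket
                    path: myapp
            ```

          Run `litestream replicate` with that config alongside the app and writes are shipped off-box within seconds.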

          If I may suggest this feature: having this tool provision a Barman server on a different VPS, and automating the process of having Postgres stream to it, would be a game changer.

          One Barman server can actually accommodate multiple database backups, so N projects could do streaming backup to one single Barman server.

          Of course, there would need to be a way to monitor whether the streaming is working correctly, and maybe even help the user with the restoration process. But that effectively brings RPO down to near zero (so almost no data loss) and can even allow point-in-time restoration.

          • mightymoud 3 hours ago

            1) Kamal is more geared towards having one VPS per project - it's made for big projects really. They also show in the demo that even the db is hosted on its own VPS. Which is great! But not for me or Sidekick's target audience. Kamal v2 will support multiple projects on a single VPS, afaik.

            2) yes yes yes! I really like Litestream. Also, backup is one of those critical but annoying things that Sidekick is meant to take care of for you. I'll look into Barman. My vision is to have one command for the most popular db types that uses stubs to configure everything the right way. Need to sort out docker-compose support first though...

            • indigodaddy 5 hours ago

              Pretty sure that fly.io for example supports litestream as I remember seeing some fly doc related to litestream when I was looking a few days ago for my own project. Would also make sense that they do given Litestream’s creator is currently Fly’s VP of Product (I believe).

              • ctvo an hour ago

                Yes, fly.io is associated with Litestream, but... how is that related to the above thread or this tool?

            • 4star3star 6 hours ago

              I like what I'm seeing, though I'm not sure I have a use case. On a VPS, I'll typically run a cloudflared container and configure a Cloudflare tunnel to that VPS. Then, I can expose any port and point it to a subdomain I configure in the CF dashboard. This gives https for free. I can expose services in containers or anything else running on the VPS.

              I'll concede there's probably a little more hands on work doing things this way, but I do like having a good grip on how things are working rather than leaning on a convenient tool. Maybe you could convince me Sidekick has more advantages?

              • skinner927 3 hours ago

                I must be an old simpleton, but why get cloudflare involved? You can get https for free with nginx and letsencrypt.
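
                A minimal version of that recipe (a sketch; assumes Debian/Ubuntu, nginx already installed, and DNS already pointing at the box):

                  ```shell
                  # certbot's nginx plugin obtains the cert and edits the config
                  sudo apt install certbot python3-certbot-nginx
                  sudo certbot --nginx -d example.com

                  # verify the bundled timer will renew it automatically
                  sudo certbot renew --dry-run
                  ```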

                • mightymoud 3 hours ago

                  It's a tunnel, so the VPS can only be reached through Cloudflare. It's not only for https, but more for security and lockdown.

                  • mediumsmart an hour ago

                    Excellent, and if Cloudflare thinks your IP is Iranian, it's going to get a really secure lockdown.

                    • nine_k 9 minutes ago

                      More seriously, it also helps when you're a target of a DDoS.

                      It's always a balancing act between outsourcing your heavy lifting, and having to trust that party and depend on them.

                • SahAssar 2 hours ago

                  Are you also making sure that nothing on the VPS is actually listening on outside ports? A classic mistake is to set up something similar to what you are describing but not validate that the services aren't listening on 0.0.0.0.
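
                  One way to audit that (a sketch; assumes a Linux host with iproute2, and the compose port mapping is illustrative):

                    ```shell
                    # TCP listeners bound to all interfaces, i.e. reachable
                    # from outside unless a firewall says otherwise:
                    sudo ss -ltnp | grep -E '0\.0\.0\.0|\[::\]'

                    # In docker-compose, publish ports on loopback only, so
                    # just the local proxy/tunnel can reach the service:
                    #   ports:
                    #     - "127.0.0.1:8080:80"
                    ```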

                  I'd also not want to have cloudflare as an extra company to trust, point of failure and configuration to manage.

                  • hu3 2 hours ago

                    Nice setup.

                    But isn't this a little too tied to Cloudflare?

                    Caddy as a reverse proxy on that VPS would also give us free HTTPS. The downside is less security because no CF tunneling.

                    • aborsy an hour ago

                      You could put Authentik in front. It does Cloudflare stuff on VPS.

                    • mightymoud 3 hours ago

                      Interesting setup....

                      How do you run the containers on your VPS tho? You could still use Sidekick for that!

                      I think your setup is one step up in security from Sidekick nonetheless. A lot more work it seems too

                      • tacone 3 hours ago

                        Interesting! How do you connect via ssh? Do you just leave the port open or is there any trick you'd like to share?

                        • vineyardmike 2 hours ago

                          I do this for “internal” apps but with Tailscale.

                        • LVB 6 hours ago

                          This looks good, and I’m a target user in this space.

                          One thing I’ve noticed is the prevalence of Docker for this type of tool, or the larger self-managed PaaS tools. I totally get it, and it makes sense. I’m just slow to adapt. I’ve been so used to Go binary deployments for so long. But I also don’t really like tweaking Caddyfiles and futzing with systemd unit files, even though the pattern is familiar to me now. Been waffling on this for quite a while…

                          • kokanee 6 hours ago

                            I'm a waffler on this as well, increasingly leaning away from containers lately. I can recall one time in my pre-Docker career when I was affected by a bug caused by software developed on Mac OS running differently than software running on CentOS in production. But I have spent countless hours trying to figure out various Docker-related quirks.

                            If you legitimately need to run your software on multiple OSes in production, by all means, containerize it. But in 15 years I have never had a need to do that. I have a rock solid bash script that deploys and daemonizes an executable on a linux box, takes like 2 seconds to run, and saves me hours and hours of Dockery.
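
                            Such a script can be genuinely small. A hypothetical sketch (not the actual script; host, paths, and unit name are made up):

                              ```shell
                              #!/usr/bin/env bash
                              set -euo pipefail

                              # build locally, ship the binary, swap it in, restart the unit
                              HOST="deploy@example-vps"
                              GOOS=linux GOARCH=amd64 go build -o myapp .
                              scp myapp "$HOST:/opt/myapp/myapp.new"
                              ssh "$HOST" 'mv /opt/myapp/myapp.new /opt/myapp/myapp \
                                && sudo systemctl restart myapp'
                              ```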

                            • bantunes 5 hours ago

                              I don't understand how running a single command to start a container (or a stack of them with compose), which ships all its requirements in something like a tarball and just runs, is seen as more complicated than running random binaries, setting values in php.ini, setting up mysql or postgres, daemonizing said binaries, and making sure libraries and the like are in order.

                              • hiAndrewQuinn 4 hours ago

                                You're going to be setting all that stuff up either way, though. It'll either be in a Dockerfile, or in a Vagrantfile (or an Ansible playbook, or a shell script, ...). But past a certain point you can't really get away from all that.

                                So I think it comes down to personal preference. This is going to sound a bit silly, but to me, running things in VMs feels like living in an apartment. Containers feel more like living out of a hotel room.

                                I know how to maintain an apartment, more or less. I've been living in them my whole life. I know what kinds of things I generally should and should not mess with. I'm not averse to hotels by any means, but if I'm going to spend a lot of time in a place, I will pick the apartment, where I can put all of my cumulative apartment-dwelling hours to good use.

                                • kokanee 2 hours ago

                                  Yes, thank you for answering on my behalf. To underscore this, the decision is whether to set up all of your dependencies and configurations with a tool like bash, or to set it all up within Docker, which involves setting up Docker itself, which sometimes involves setting up (and paying for) things like registries and orchestration tools.

                                  I might tweak the apartment metaphor because I think it's generous to imply that, like a hotel, Docker does everything for you. Maybe Dockerless development is like living in an apartment and working on a boat, while using Docker is like living and working on a houseboat.

                                  There is one thing I definitely prefer Docker for, and that's running images that were created by someone else, when little to no configuration is required. For example, running Postgres locally can be nicer with Docker than without, especially if you need multiple Postgres versions. I use this workflow for proofs of concept, trials, and the like.

                                • bluehatbrit 5 hours ago

                                  I suppose like anything, it's a preference based on where the majority of your experience is, and what you're using it for. If you're running things you've written and it's all done the same way, docker probably is just an extra step.

                                  I personally run a bunch of software I've written, as well as open source things. So for me docker makes everything significantly easier, and saves me installing a lot of rubbish I don't understand well.

                                  • oarsinsync 4 hours ago

                                    After 20 years of various things breaking on my (admittedly franken) debian installs after each dist-upgrade, and spending days troubleshooting each time, I recently took the plunge and switched all services to docker-compose.

                                    I then booted into a new fresh clean debian environment, mounted my disks, and:

                                      cd /opt/docker/configs; for i in */; do (cd "$i" && docker-compose up -d); done
                                    
                                    voila, everything was up and working, and no longer tied to my underlying OS. Now at least I can keep my distro and kernel etc all up to date without worrying about anything else breaking.

                                    Sure, I have a new set of problems, but they feel smaller.

                                    • dijksterhuis an hour ago

                                      Thou hast discovered docker's truest use case.

                                      Like, legit, this is the whole point of docker. Application/service dependencies are no longer tied to the server it is running on, mitigating the worst parts of dependency hell.

                                      Although, in your case, I suppose your tolerance for dependency hell has been quite high ;)

                                      • Ringz an hour ago

                                        I'm doing exactly the same thing. I started to do everything on Synology with Docker Compose and replaced most of the Synology apps with open source applications.

                                        At some point I moved individual containers to other machines and they work perfectly. VPS, NUC, no matter what.

                                      • stackskipton 5 hours ago

                                        Yea, in the same boat, and I'm wondering if there's a big contingent of devs out there that bristle at Docker. The biggest issue I run into writing my lab software is finding a decent enough container registry, but now I just endorse the free tier of Vultr CR.

                                  • faangguyindia 4 hours ago

                                    Here's the thing: we've had code running on a VPS in the cloud for a decade without any problem.

                                    When we ran it on Kubernetes, without touching it, it broke itself within 3 years.

                                    Docker is a fantastic development tool, I do see real value in it.

                                    But Kubernetes and its whole ecosystem? You must apply updates or your stuff will break one day.

                                    Currently I am using Docker with docker compose and GCR; it makes things very simple and easy to develop, and it's also self-documenting.

                                    • mikkelam 5 hours ago

                                      There are tools like Firecracker that significantly reduce docker overhead: https://firecracker-microvm.github.io/

                                      I believe fly.io uses that. Not sure if OP’s tool does that

                                      • mightymoud 3 hours ago

                                        No, Sidekick doesn't use Firecracker. I know fly.io is built around it, yes. They do that so they can put your app to sleep - basically shutting it down - then spin it up real quick when it gets a request. There's no place for this in Sidekick's vision.

                                        • indigodaddy 5 hours ago

                                          Was wondering the same - didn't see any mention of it on the GH page though, nor even in the roadmap.

                                      • nhatcher an hour ago

                                        This looks fantastic TBH! Can't wait to give it a go. Congratulations. I've long thought something like this should be possible. The only thing I've done is document carefully my own steps:

                                        https://www.nhatcher.com/post/a-cto-on-a-shoestring/

                                        • tegiddrone an hour ago

                                          Looks nice! Something I'd want in front is some sort of basic app firewall like fail2ban or CrowdSec to ban vuln scanners and other intrusion attempts. It is a nice thing about Cloudflare since they provide some of this protection.
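
                                          A stock fail2ban jail covers the common case. A sketch (thresholds are arbitrary, not recommendations):

                                            ```ini
                                            # /etc/fail2ban/jail.local (fragment): ban IPs probing
                                            # nginx for paths that don't exist (vuln-scanner behaviour)
                                            [nginx-botsearch]
                                            enabled  = true
                                            port     = http,https
                                            logpath  = /var/log/nginx/access.log
                                            maxretry = 10
                                            bantime  = 1h
                                            ```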

                                          • turtlebits an hour ago

                                            What about this is highly available? On a single VPS?

                                            Does this only support a single app?

                                            Nice project, but the claims (production ready? Load balancing on a single server?) are a bit ridiculous.

                                            • closewith an hour ago

                                              In my experience, single apps on VPSes have far higher availability in practice than the majority of convoluted deployments.

                                              • dewey an hour ago

                                                Highly available is overrated for most use cases, especially for any side projects.

                                              • gf297 28 minutes ago

                                                What's the purpose of encrypting the env file with sops, when the age secret key is stored on the VPS? If someone has access to the encrypted env file, they will also have access to the secret key, and can decrypt it.

                                                • funshed an hour ago

                                                  Nice, you should probably explain what traefik, sops and age will do. First time I've heard of sops, very handy!

                                                  • silasb 6 hours ago

                                                    Nice, I'm working in the same space as you (not opensource, personal project). We landed on the same solution, encoding the commands inside Golang and distributing those via SSH.

                                                    I'm somewhat surprised not to see this more often. I'm guessing supporting multiple Linux versions could get unwieldy; I focused on Ubuntu as my target.

                                                    Differences that I see.

                                                    * I modeled mine on-top of docker-plugins (these get installed during the bootstrapping process)

                                                    * I built a custom plugin for deploying which leveraged https://github.com/Wowu/docker-rollout for zero-downtime deployments

                                                    Your solution looks much simpler than mine. I started off modeling mine off fly.io CLI, which is much more verbose Go code. I'll likely continue to use mine, but for any future VPS I'll have to give this a try.

                                                    • mightymoud 3 hours ago

                                                      hahah seems like we went down the same rabbit hole. I also considered `docker-rollout` but decided to write my own script. Heavily inspired by the docker-rollout source code btw. Just curious, why did you decide to go with docker plugins?

                                                    • bluehatbrit 6 hours ago

                                                      This is super nice, and I'm a big fan of the detailed readme with screenshots.

                                                      I'll definitely be trying it out, although I do have a pretty nice setup now which will be hard to pull away from. It's ansible driven, lets me dump a compose file in a directory, along with a backup and restore shell script, and deploys it out to my server (hetzner dedicated via server auction).

                                                      It's really nice that this handles TLS/SSL; that was a real pain for me, as I've been using nginx and automating certbot wasn't the most fun in the world. This looks a lot easier on that front!

                                                      • mightymoud 3 hours ago

                                                        Sounds like you have a great setup. My vision is to make a setup like yours more accessible really w/o having to play with low level config like ansible. I think you should try to replace nginx with Traefik - it handles certs out of the box!
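
                                                        With Traefik, per-app certificates come down to container labels. A sketch, assuming a Traefik v2 setup with an ACME resolver named `le` (all names are placeholders):

                                                          ```yaml
                                                          services:
                                                            app:
                                                              image: myapp:latest
                                                              labels:
                                                                - "traefik.enable=true"
                                                                - "traefik.http.routers.app.rule=Host(`app.example.com`)"
                                                                - "traefik.http.routers.app.tls.certresolver=le"
                                                          ```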

                                                      • johnklos 2 hours ago

                                                        "to self-host any app"

                                                        Docker != app. Perhaps it'd be more accurate to say, "to host any Docker container"?

                                                        • joseferben 2 hours ago

                                                          this looks amazing!

                                                          i’m building https://www.plainweb.dev and i’m looking for the simplest way to deploy a plainweb/plainstack project.

                                                          looks like sidekick has the same spirit when it comes to simplicity.

                                                          in the plainstack docs i’ve been embracing fly.io, but reliability is an issue. and sqlite web apps (which is the core of plainstack) can’t have real zero downtime deployments, unless you count the proxy holding the pending request for 30 seconds while the fly machine is deployed.

                                                          i tried kamal but it felt like non-ruby and non-rails projects are second class citizens.

                                                          i was about to document deploying plainstack to dokku, but provisioning isn’t built-in.

                                                          my dream deployment tool would be dokku + provisioning & setup, sidekick looks very close to that.

                                                          definitely going to try this and maybe even have it in the blessed deploy path for plainstack if it works well!

                                                          • Hexigonz 6 hours ago

                                                            Ohhhh I like this. I really enjoy the flyctl CLI tools from Fly.io, which simplifies in a similar manner, but it's platform specific. Good work

                                                            • AndrewCopeland 5 hours ago

                                                              It's a simple CLI in Go. It uses Docker. There is no k8s. It handles certs. Zero downtime.

                                                              I would love for it to support docker-compose, as some of my side projects need a library in python, but I like having my service be in go, so I will wrap the python library in a super simple service.

                                                              Overall this is awesome and I love the simplicity, with the world just full of serverless, AI and a bunch of other "stuff". Paralysis through analysis is really an issue, and when you are just trying to create a service for yourself or an MVP, it can be a real hindrance.

                                                              I have been gravitating towards Taskfile to perform similar tasks to this. God speed to you and keep up the great work.

                                                              • mightymoud 3 hours ago

                                                                Thanks man! I'm working on the docker-compose support. I got it working locally, but the ergonomics are really hard to get right, cus compose files are so flexible. I was even considering using the `sidekick.yaml` file as the main config and then turn that into docker compose - similar to what fly.io does with fly.toml. But I wanna keep this docker centric... so yeah I am still doing more thinking around this

                                                              • sigmonsays 2 hours ago

                                                                tools like this are pretty sweet but I would rather just run it myself.

                                                                docker-compose with a load balancer (traefik) is fairly straightforward and awesome. the TLS setup is nice but I wildcard that and just run certgen myself.

                                                                The main thing I think that's missing is some sort of authentication or zero trust system, maybe vpn tunnel provisioner. Most services I self host I do not want to be made public due to security concerns.

                                                                • Sn0wCoder 5 hours ago

                                                                  This looks great. Just bookmarked it, and then had to double check that I did not already bookmark it a few weeks ago. Turns out I had bookmarked Caddy, which is similar but does not deploy the app, and I don't think it supports Docker. It was the auto cert feature that I was interested in and that had stuck out in my mind. Set up certbot and never think about it again, until my server needed to be rebuilt and I started researching. Good to go for a few months, but my hosting will be up in a year, and I'm going to switch providers and upgrade my setup to 2+ gigs so I can run docker reliably. Thanks for posting; this one just moved to the top of the list.

                                                                  • indigodaddy 4 hours ago

                                                                    In what sense would Caddy not support Docker? You can use Caddy on the host itself to proxy to a docker container, and you could also run Caddy as a Docker container to proxy to other Docker containers (the latter would just need an initial incoming iptables rule to reach the caddy container - although Caddy might have instructions somewhere on a more elegant way than iptables to get connections to the Docker caddy container, not sure).
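
                                                                    For the host-level scenario the whole config can be this (a sketch; domain and port are placeholders), and Caddy obtains and renews the TLS cert on its own:

                                                                      ```
                                                                      app.example.com {
                                                                          reverse_proxy 127.0.0.1:8080
                                                                      }
                                                                      ```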

                                                                  • aag 4 hours ago

                                                                    This could be great for my projects, but I'm confused about one thing: why does it need to push to a Docker registry? The Dockerfile is local, and each image is built locally. Can't the images be stored purely locally? Perhaps I'm missing something obvious. Not using a registry would reduce the number of moving parts.

                                                                    • 3np an hour ago

                                                                      You can easily set up a Docker/CNCF registry[0] container running locally. It can be run either as a caching pull-through mirror for a public registry (allowing you to easily run public containers in an environment without internet access) or as a private registry for your own image (this use-case). So if you want both features, you currently need two instances. Securing it for public use is a bit less trivial but for local use it's literally a 'run' or two.

                                                                      So you can do 'docker build -t localhost/whatever' and then 'docker run localhost/whatever'. Also worth checking out podman to more easily run everything rootless.
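
                                                                      The 'run' in question, roughly (a sketch; the registry is bound to loopback so it isn't exposed publicly, and with stock docker the port stays part of the image name):

                                                                        ```shell
                                                                        docker run -d --name registry -p 127.0.0.1:5000:5000 registry:2

                                                                        docker build -t localhost:5000/whatever .
                                                                        docker push localhost:5000/whatever
                                                                        ```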

                                                                      If all you need is to move images between hosts like you would files, you don't even need a registry (docker save/load).

                                                                      [0]: https://distribution.github.io/distribution/

                                                                      • mightymoud 3 hours ago

                                                                        Locally here means locally on your laptop, not locally on your VPS. Contrary to popular opinion, I believe your source code shouldn't be on your prod machine - a docker image is all you need. Lots of other projects push your code to the VPS to build the image there, then use it. I see no point in doing that...

                                                                        • sdf4j 2 hours ago

                                                                          The docker registry can be avoided by exporting/importing the docker image over ssh.
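
                                                                          Something like this (image and host names assumed):

                                                                            ```shell
                                                                            # stream the image over ssh; no registry involved
                                                                            docker save myapp:latest | gzip | ssh user@vps 'gunzip | docker load'
                                                                            ```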

                                                                      • InvOfSmallC 2 hours ago

                                                                        Can I run more than one app on the same VPS with this solution?

                                                                        I currently run more than one app on a single VPS.

                                                                        • achempion 6 hours ago

                                                                          This looks amazing, congrats on the release! Really looking forward to the database hosting feature as well (and probably networking and mounting data dirs).

                                                                          As a side note, any reason why you decided against using docker in swarm mode, as it should have all these features already built in?

                                                                          • mightymoud 3 hours ago

                                                                            Correct me if I'm wrong, but Docker Swarm mode is made to manage multi-node clusters. Sidekick is meant for only one single VPS.

                                                                            • achempion 2 hours ago

                                                                              You can use docker swarm just for single VPS.

                                                                                - install docker 
                                                                                - run docker swarm init
                                                                                - create yaml that describes your stack (similar to docker-compose)
                                                                                - run docker stack deploy
                                                                              
                                                                              That's basically it. My go-to solution when I need to run some service on single VPS.

                                                                              If you want to just run a single container, you can also do this with `docker service create image:tag`
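
                                                                              A minimal single-node stack file might look like this (a sketch; image, port, and replica count are placeholders):

                                                                                ```yaml
                                                                                # stack.yml - deploy with: docker stack deploy -c stack.yml myapp
                                                                                version: "3.8"
                                                                                services:
                                                                                  web:
                                                                                    image: myapp:latest
                                                                                    ports:
                                                                                      - "80:8080"
                                                                                    deploy:
                                                                                      replicas: 1
                                                                                ```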

                                                                              • 3np an hour ago

                                                                                I thought docker-swarm had been considered neglected to the point of dead and without a future for a few years now. Is this impression incorrect/outdated?

                                                                                EDIT: So apparently what used to be known as "Docker Swarm" has been posthumously renamed to "Swarm Classic"/"Classic Swarm" and is indeed dead, abandoned, and deprecated. The project currently known as "Docker Swarm" is a younger, completely different project which appears actively maintained. "Classic" still has roughly twice the GH stars and forks compared to the new one. I can't be the only one who's dismissed the latter, assuming it to be the former. Very confusing naming and branding; they would probably have way more users if they had not repurposed the name like this.

                                                                                https://github.com/docker-archive/classicswarm

                                                                                > Swarm Classic: a container clustering system. Not to be confused with Docker Swarm which is at https://github.com/docker/swarmkit

                                                                          • spelunker 5 hours ago

                                                                            Looks great! I similarly got frustrated about the complexity of doing side-project ops stuff and messed around with Kamal, but this goes the extra mile by automatically setting up TLS as well. I'll give it a try!

                                                                            • dvaun 5 hours ago

                                                                              Awesome! Love that it's written in Go—I've recently tested the language for some use cases at work and find it great. I'll dive into your repo to see if I can learn anything new :)

                                                                              • Canada 5 hours ago

                                                                                Very well presented, the README.md looks great.

                                                                                • mightymoud 3 hours ago

                                                                                  Thanks! This comment really makes my day!

                                                                                • replwoacause 5 hours ago

                                                                                  Can’t wait to try this out..!

                                                                                  • rafaelgoncalves 5 hours ago

                                                                                    This really looks nice! Congrats!

                                                                                    • jjkmk 6 hours ago

                                                                                      Looks really good, going to test it out.

                                                                                      • devmor 6 hours ago

                                                                                        Wow this is super handy! I have paid tools that function like this for a couple of specific stacks but this seems like an amazing general purpose tool.

                                                                                        Considering the ease of setup the README purports, a few hours of dealing with this might save me a couple hundred bucks a month in service fees.

                                                                                        • mightymoud 3 hours ago

                                                                                          Glad you found this useful. Let me know if you have specific features in mind.

                                                                                          • devmor 3 hours ago

                                                                                            I didn't see anything in the README about deploy hooks - do you have a feature that lets users run arbitrary commands around a deploy? I have common use cases on both sides of the traffic switchover: pre (ex. database migrations) and post (ex. resource caching, worker spinup).

                                                                                            • mightymoud 3 hours ago

                                                                                              Yup, deploy hooks are on my mind - just didn't put them in the README. Shouldn't be very hard to implement. Might do this first, before docker-compose support.
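                                                                                              In the meantime, the pre/post split described above can be scripted by hand around a deploy. A hypothetical sketch - the tool has no hooks yet, and every command and script name below is invented, not its real CLI:

```shell
#!/bin/sh
set -e

./hooks/pre-deploy.sh    # e.g. run database migrations before traffic switches
deploy-tool deploy       # placeholder for the actual deploy/traffic-switch command
./hooks/post-deploy.sh   # e.g. warm resource caches, spin up workers
```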

                                                                                        • superkuh 6 hours ago

                                                                                          I don't know about you, but I find the single command $ sudo apt install $x much faster; it offers a wider range of software, is more reliable, less fragile, easier to network, and more secure when it comes to running applications on an Ubuntu VPS. The only thing the normal way of running applications is worse at (compared to this dependency manager manager) is "Zero downtime".

                                                                                          • LVB 6 hours ago

                                                                                            I’m not sure what you’re comparing that to. This project is about easily deploying your own app/side-project, which wouldn’t be available via apt.

                                                                                            • superkuh 6 hours ago

                                                                                              99% of what people run in docker is just normal applications.

                                                                                              • indigodaddy 4 hours ago

                                                                                                Not sure how true this statement is in general, but it's definitely not true for the project's stated use case, i.e. your own side project/app, which you obviously can't "apt install". Unless OP meant the supporting hosting/proxy infra like Apache/nginx, but that's exactly what this project is trying to abstract away so the user doesn't have to deal with it.

                                                                                                At the end of the day, if you use this tool, all you'd need to worry about (assuming the tool is stable and works) would be apt upgrades of the OS, and even that you can automate; then just figure out your reboot strategy. For me, I don't even want to deal with that, so I happily use fly.

                                                                                                • mightymoud 3 hours ago

                                                                                                  Respect! Fly is an absolute beast and to me is best in class for sure!

                                                                                            • mightymoud 3 hours ago

                                                                                              I think this is just miscommunication - I meant a side project/application that you made yourself, not an application package you install on Ubuntu.