• windexh8er 19 hours ago

    I've been using the LSIO Webtop images for a few years. They're awesome for composable desktops that I run behind a VPN for a quick-and-dirty connection at home.

    Combine the Webtop images with the Gluetun [0] container, forcing their traffic through it, and you're up and running. These Webtop containers are nice and snappy as well, thanks to Kasm. Awesome OSS.

    [0] https://github.com/qdm12/gluetun
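    A minimal compose sketch of that setup (untested; VPN provider settings are placeholders, and port 3000 is assumed as Webtop's web UI - check each project's docs):

```yaml
# Sketch: route Webtop's traffic through Gluetun by sharing its
# network namespace. VPN provider settings are placeholders.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=...    # your provider and credentials here
    ports:
      - "127.0.0.1:3000:3000"       # Webtop's UI, published via gluetun
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    network_mode: "service:gluetun" # all webtop traffic exits via the VPN
```

    Because webtop joins gluetun's network namespace, the port mapping has to live on the gluetun service, not on webtop.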

    • Havoc 12 hours ago

      This is me being rather lazy admittedly, but do you have a compose/similar for connecting them by chance?

    • yonatan8070 18 hours ago

      Could one build off this to run the desktops on the local host's display? I.e. as an IoT device with a display and some custom UI?

    • chrisweekly a day ago

      > "Warning

      Do not put this on the Internet if you do not know what you are doing.

      By default this container has no authentication and the optional environment variables CUSTOM_USER and PASSWORD to enable basic http auth via the embedded NGINX server should only be used to locally secure the container from unwanted access on a local network. If exposing this to the Internet we recommend putting it behind a reverse proxy, such as SWAG, and ensuring a secure authentication solution is in place. From the web interface a terminal can be launched and it is configured for passwordless sudo, so anyone with access to it can install and run whatever they want along with probing your local network."

      I hope everyone intrigued by this interesting and potentially very useful project takes heed of this warning.

      • satertek a day ago

        That warning applies to anything you run locally. And going further, in this day and age, I would never put up any home service without it being behind Cloudflare Access or some form of wireguard tunnel.

        • Timber-6539 17 hours ago

          Just put basic auth in front of your services and be done with it.
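          For example, with nginx in front (paths and upstream port illustrative):

```nginx
# Hypothetical reverse-proxy location guarding a local service.
location / {
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;   # create with: htpasswd -c
    proxy_pass           http://127.0.0.1:3000;  # the service being protected
}
```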

          • KronisLV 14 hours ago

            I've done that in the past, even for securing the admin pages of some software (there was once an issue where the admin page auth could be bypassed, so this essentially adds another layer). With TLS, it's okay for getting something up and running quickly.

            Of course, for the things that matter a bit more, you can also run your own CA and do mTLS, even without any of the other fancy cloud services.
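            A rough openssl sketch of the own-CA-plus-client-cert flow (all names and filenames illustrative, not from any particular guide) - the point being that the server holds the CA cert, and a client cert verifies because the CA key signed it:

```shell
# 1. Create the CA key and a self-signed CA certificate.
#    The CA cert is what the *server* uses to verify client certs.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=Home Lab CA"

# 2. Create a client key and a certificate signing request (CSR).
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr \
  -subj "/CN=family-member"

# 3. Sign the client CSR with the CA key. The resulting client.crt
#    chains back to ca.crt, which is how the server verifies it.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365

# 4. Bundle key + cert into a PKCS#12 (.pfx) file for browser import.
openssl pkcs12 -export -inkey client.key -in client.crt \
  -certfile ca.crt -out client.pfx -passout pass:changeit

# Sanity check: this should print "client.crt: OK".
openssl verify -CAfile ca.crt client.crt
```

            The server (nginx, immich-public-proxy, etc.) is then pointed at ca.crt as its trusted client CA, and the browser imports client.pfx.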

            • jazzyjackson 10 hours ago

              After coming across a brief tutorial on mTLS in this tool for locking down access to my family photo sharing [0], I have bounced around the internet following various guides, but haven't ended up with a pfx file that I can install in a browser. Can you recommend any resource for understanding which keys sign what, and what a client certificate is verified against?

              The guides I find often contain the openssl incantations with little explanation, so I feel a bit like I'm stumbling through the dark. I realize how much I've taken stack traces for granted when this auth stuff is very "do or do not, there is no error".

              [0] https://github.com/alangrainger/immich-public-proxy/blob/mai...

            • baq 13 hours ago

              the fact that we have to keep reinventing kerberos all the time because it doesn't speak http is starting to legitimately annoy me.

              • rlkf 11 hours ago

                Firefox can be configured to use Kerberos for authentication (search for "Configuring Firefox to use Kerberos for SSO"); on Windows, Chrome is supposed to do so too by adding the domain as an intranet zone.
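                For reference, the about:config prefs involved (hosts illustrative - set them to your realm's sites):

```
// Negotiate (Kerberos/SPNEGO) auth prefs in Firefox's about:config
network.negotiate-auth.trusted-uris    = https://intranet.example.com
network.negotiate-auth.delegation-uris = https://intranet.example.com
```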

                • j16sdiz 10 hours ago

                  HTTP auth can work with kerberos.

                  Chrome, Firefox, Internet Explorer -- all support some form of kerberos auth in HTTP/HTTPS.

                  • baq 10 hours ago

                    I mean, I'm aware of SPNEGO etc. It's just that it was... ignored(?) by the startups/the community/google? What little support there is is a comparatively worse experience than what we've got now, for no really good reason.

                    • mschuster91 7 hours ago

                      Kerberos is old neckbeard tech, highly complex to set up, with layers upon layers of legacy garbage. Trying to get it working is ... a nightmare, I prefer even the garbagefest that is Keycloak over dealing with Kerberos. At least that just requires somewhat working DNS and doesn't barf when encountering VPNs, split horizon DNS or split tunnels.

                      The only places I've seen a working Kerberos setup outside of homelabs are universities (who can just throw endless amounts of free student labor onto solving any IT problem), large governments, and international megacorps.

                • mschuster91 7 hours ago

                  Good luck when the TCP or SSL stack has an issue. These bugs are rare but they do exist and you're getting fucked royally if your entire perimeter defense was a basic auth prompt.

                  Windows and Linux have both had their fair share of network stack bugs, OpenSSL had Heartbleed and a few other bugs, and hell you might even run into bugs in Apache or whatever other webserver you are using.

                  • nurettin 3 hours ago

                    It would have taken several days to heartbleed your private key in 2013 if you also added fail2ban. Your home lab probably isn't on the high priority target list.

              • gbraad 6 hours ago

                I created a personalized image with Tailscale and KasmVNC for this particular reason, ... not on a public VPS. You can find the images on my GitHub as inspiration; do not copy them directly unless you understand what you are doing.

                • hifikuno a day ago

                  Yeah, I made a mistake with my config. I had set up SWAG, with Authelia (I think?). Got password login working with 2FA. But my dumbass didn't realize I had left ports open. Logged in one day to find a terminal open with a message from someone who had found my instance and got in. Called me stupid (I mean, they're not wrong) and all kinds of things, and deleted everything from my home drive to "teach me a lesson". Lesson painfully learnt.

                  But before that happened, Webtop was amazing! I had Obsidian set up so I could have access on any computer. It felt great having "my" computer anywhere I went. The only reason I don't have it set up is because I made the mistake of closing my free tier Oracle Cloud account, thinking I could spin up a fresh new instance, and since then I haven't been able to get the free tier again.

                  • elashri 19 hours ago

                    > The only reason I don't have it set up is because I made the mistake of closing my free tier Oracle Cloud account, thinking I could spin up a fresh new instance, and since then I haven't been able to get the free tier again.

                    People are automating the process of requesting new ARM instances on the free tier [1]. You would find it near impossible to compete without playing the same game.

                    [1] https://github.com/mohankumarpaluru/oracle-freetier-instance...

                    • 7thpower 18 hours ago

                      Well, I know what I’m doing tomorrow when I get up.

                      • hrrsn 3 hours ago

                        I had the same thing happen to me. I tried running a script for a month without luck (Sydney region). What did work was adding a credit card to upgrade to a paid account - no issues launching an instance, and it's still covered under the free tier.

                    • Maakuth 17 hours ago

                      There are operations that put cryptominers into any unauthenticated remote desktops they can find. Ask me how I know... Way friendlier than wiping your data though.

                      • unixhero 14 hours ago

                        There are groups of people who hunt for writeable FTP servers to use for random filesharing. At least, this used to be a thing.

                      • dspillett 12 hours ago

                        > Lesson painfully learnt.

                        There are actually two lessons there:

                        1. Be careful what you open to the public internet, including testing to make sure you aren't accidentally leaving open defaults as they are.

                        2. Backups. Set them up, test them, make sure someone successfully gaining access to the source box(es) can't from there wipe all the backups.

                        • doubled112 8 hours ago

                          An offline backup is incredibly inconvenient, but also very effective against shenanigans like these.

                          Also agree that backups should be "pulled" with no way to access them from the machine being backed up.

                          • dspillett 5 hours ago

                            I use a soft-offline backup for most things: sources push to an intermediate, backups pull from the intermediate, and neither source nor backup can touch each other directly.

                            Automated testing for older snapshots is done by verifying checksums made at backup time; for the latest, fresh checksums are pushed from both ends to the middle for comparison. Anything with a timestamp older than the last backup that differs in checksum indicates an error on one side or the other (or perhaps the intermediate) that needs investigating, as does any file whose timestamp differs by more than the inter-backup gap, or anything that unexpectedly doesn't exist in the backup.

                            I have real offline backups for a few key bits of data: my main KeePass file, the encryption & auth details for the backup hosts & process (those don't want to exist in the main backup, as that would create a potential hole in the source/backup separation), etc.
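                            The checksums-made-at-backup-time part can be sketched with plain coreutils (paths illustrative):

```shell
# Sketch: record checksums at backup time, verify them later.
# Paths are illustrative stand-ins for a real snapshot directory.
mkdir -p /tmp/snap-demo
echo "important data" > /tmp/snap-demo/notes.txt

# At backup time: store checksums next to the snapshot.
(cd /tmp && find snap-demo -type f -exec sha256sum {} +) > /tmp/snap-demo.sha256

# Later: verify nothing listed has changed or gone missing.
(cd /tmp && sha256sum --check --quiet snap-demo.sha256) && echo "snapshot OK"
```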

                        • nsteel 14 hours ago

                          But you can have Obsidian access from any device already if you set up syncing using the official method (and support the project by doing so) or one of the community plugins. Doing it this normal way avoids opening up a massive security hole too.

                          • jazzyjackson 9 hours ago

                            * any device you have admin rights to install software on; they are talking about being able to log in from any computer, not just their own

                            It surprises and annoys me that obsidian, logseq, etc don't have self hosted web front ends available. I think logseq will once they wrap up the db fork, and maybe someday we'll have nuclear fusion powerplants too.

                            • nsteel 11 minutes ago

                              Ahhh that makes perfect sense, thank you. I'm so used to always having my phone this didn't even cross my mind.

                          • 7bit 21 hours ago

                            > deleted everything from my home drive to "teach me a lesson". Lesson painfully learnt.

                            I had a mentor in my teenage years that was the same kind of person. To this day, the only meaningful memory I have of him is that he was an asshole. You can teach a lesson and be empathetic towards people who make mistakes. You don't have to be an asshole.

                            • Dalewyn 21 hours ago

                              The lessons we learn best are those which we are emotionally invested in and sometimes that emotion can be negative, but a lesson will be learned regardless.

                              • ano-ther 12 hours ago

                                Sure. But you don’t have to deliberately destroy all data and be mean about it as in GP‘s case to get an emotional reaction.

                            • jillyboel 7 hours ago

                              No backups?

                            • fulafel 18 hours ago

                              Also note that their example docker config will allow anyone from the internet to connect, and will even add an incoming rule to your host firewall to allow it. This is because they don't specify the bind address in the port mapping, like -p 127.0.0.1:hostport:containerport (or the analog in the docker-compose config).
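                              A sketch of the safer binding (port 3000 assumed as the web UI; check the image docs):

```shell
# Publish the container only on the loopback interface so it is not
# reachable from other hosts on the network or the internet.
docker run -d \
  --name=webtop \
  -p 127.0.0.1:3000:3000 \
  lscr.io/linuxserver/webtop:latest

# docker-compose analog:
#   ports:
#     - "127.0.0.1:3000:3000"
```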

                              • asyx 9 hours ago

                                No they won’t. Octoprint (3d printing server) had a similar warning but they had to introduce actual user accounts to secure the system because people ignored it.

                                • macinjosh 9 hours ago

                                  If a good password is used, HTTP basic auth is plenty secure over HTTPS, since everything is encrypted.

                                • bo0tzz 11 hours ago

                                  My pipedream is to have a containerized desktop environment like this that outputs directly to a physical monitor over HDMI/DP without needing an X server on the host machine. So far I haven't found any clear answers on whether that's possible at all.

                                  • jazzyjackson 9 hours ago

                                    I feel I've been nerdsniped (or some other term for your quest being contagious); I also now need to know if this is possible.

                                    I found a thread from someone who seems to know what they're talking about saying it's not going to happen "on your hardware", but doesn't mention what hardware might be required

                                    https://forum.level1techs.com/t/can-intel-integrated-gpu-out...

                                    Edit: actually, reading that link again, it sounds like a USB adapter worked right away as a monitor for the VM, and the OP is asking how to prevent this! So it seems you just need to enable GPU passthrough, and a USB HDMI adapter will appear to your VM? Will have to try this later today.

                                  • chromakode 15 hours ago

                                    Selkies [1] is another interesting project in this space. It uses WebRTC for low-latency streaming and remote desktop, suitable for gaming in the browser.

                                    [1]: https://selkies-project.github.io/selkies-gstreamer/

                                    • mch82 16 hours ago

                                      Since the website doesn’t have pictures or videos… Is “webtop” a way to package GUI desktop apps in a Docker container so that the only dependencies to run the app are Docker Desktop and a web browser?

                                      • weitzj 15 hours ago

                                        Yes. There are some screenshots in the documentation, and this is possible - like starting a standalone Firefox browser inside Docker Desktop and accessing it via a browser VNC session.

                                        But you get to control the keyboard/clipboard, it can apparently add watermarks to the VNC session for DLP functionality, and there is an HTTP API to take screenshots of your VNC sessions.

                                      • mopoke 13 hours ago

                                        And, of course, I decided to see what happened if I fired up firefox in webtop and loaded webtop in it. Oops.

                                        • Dansvidania 11 hours ago

                                          seems only natural, I sympathize

                                        • ctm92 16 hours ago

                                          Kasm [1] also has ready-to-use images that work similarly. They are also customizable to contain your own applications or configuration. They're intended to be used with the Kasm Workspaces solution, but they work standalone just fine.

                                          [1] https://hub.docker.com/u/kasmweb

                                        • iddan 14 hours ago

                                            Back when I was in middle school in Israel, the system used for communication between teachers, students, and parents was called Webtop. They actually went all the way to implement an OS desktop experience in the browser (this was long, long ago). It was very silly but cute.

                                          • mhitza a day ago

                                              Anyone have more info on this? Does it run systemd in those containers? (I didn't see any systemd-specific mounts.)

                                            This would be interesting to try out, as docker (via compose) is a bit easier to manage than - for example - VMs with virt-manager/cockpit-machines.

                                            • r3c0nc1l3r 20 hours ago

                                              No systemd, these just start a shell script on init that launches the WM. They're based around the open-source component of this product: https://www.kasmweb.com/docs/latest/index.html

                                              I find that they are slightly more sluggish than Moonlight/Sunshine for remote streaming, but generally faster/better than x11vnc. Not quite good enough for gaming yet, but plenty for web browsing, Blender, etc.

                                            • dymk 17 hours ago

                                              We had an application that had quite the complex build process, and targeted only macOS and Linux. The mechanical engineers all used Windows, and needed to use the application. Rather than buying them macbooks or having them manage a Linux box, I wound up building something like Webtop with webvnc, and deployed containers to google cloud. Engineers could go to a URL and access the application, no need to download or install anything. It worked pretty well, all things considered.

                                              • euph0ria 11 hours ago

                                                What are some use cases for this?

                                                • hugs 8 hours ago

                                                  software testing. it's always software testing.

                                                • mcflubbins 3 hours ago

                                                      What the heck is the webtop logo? Am I just dense? I can't make out what it's supposed to be. Maybe a penguin somewhere in there, but... I dunno.

                                                  https://raw.githubusercontent.com/linuxserver/docker-templat...

                                                  • fosron 21 hours ago

                                                      Why did I have to find this at 3AM. Thanks though.

                                                    • Jnr 9 hours ago

                                                      How come Gnome is not there?

                                                      • doubled112 4 hours ago

                                                        While I've never used GNOME inside a KasmVNC session, it used to feel slow on reasonably powerful machines in an Xrdp session.

                                                        That'd be my first guess.

                                                      • ranger_danger 18 hours ago

                                                        Looks very similar to neko: https://neko.m1k1o.net

                                                        • imran9m 20 hours ago

                                                            Nice. Finally an easy way to test them!!

                                                          • deelowe 20 hours ago

                                                            I was just looking for something like this. Awesome!

                                                            • bitsandbooks 9 hours ago