• nerevarthelame a day ago

    This is the first time I've heard of slopsquatting, but it does seem like a major and easily exploitable risk.

    However, blocking an email domain will dissuade only the lowest effort attacker. If the abusers think slopsquatting is effective, they'll easily be able to find (or create) an alternative email provider to facilitate it.

    And assuming the attacks will persist, sometimes it's better to let them keep using massive red flags like an inbox.ru email, so that it remains a reliable way to separate the fraudulent from the legitimate activity.

    • halJordan a day ago

      Of course this is true. It's the worst reason to denigrate a proactive measure. Speeders buy radar detectors. Wife beaters buy their wives long sleeves. This complaint is levied all the time by everyone, which makes it low effort and not useful.

      • genidoi a day ago

        The problem with using random real-world situations as analogies for niches within Software Engineering is that they're not only (almost) always wrong, but also misrepresentative of the situation in its entirety

        • redserk a day ago

          Our entire profession is “how can we make things difficult enough to not be used incorrectly”

          That applies from user experience: “how do I get the user to click the button”, to security: “how do I lock things down enough to prevent most attacks I can think of”, to hardware design: “how do I ensure the chipset won’t melt down under really stupid software conditions”

          Starting with the low-hanging fruit isn’t always the worst option. Sometimes it’s enough to make people give up.

    • WmWsjA6B29B4nfk a day ago

      It’s funny they are talking about low hundreds of emails. This is what a single properly instructed human can create with any provider in a few hours, no bots needed.

      • bobbiechen a day ago

        Agreed, I thought it was going to be something automated, but 250 accounts in 7 hours seems pretty manual. That does make it harder to stop.

        * 2025-06-09 first user account created, verified, 2FA set up, API Token provisioned

        * 2025-06-11 46 more user accounts created over the course of 3 hours

        * 2025-06-24 207 more user accounts created over the course of 4 hours

        I do run https://bademails.org , powered by the same disposable-email-domains project, and I'll be the first to say that it only cuts out the laziest of attempts. Anyone even slightly serious has cheap alternatives (25 to 100+ accounts for $1 on popular email hosts).
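
        For anyone curious what that boils down to in practice, here's a rough sketch (not bademails.org's actual code) of checking a signup address against the project's blocklist; the raw URL and branch name are from memory, so treat them as assumptions:

            import urllib.request

            # Assumption: the project's published blocklist file; pin a specific commit in real use.
            BLOCKLIST_URL = (
                "https://raw.githubusercontent.com/disposable-email-domains/"
                "disposable-email-domains/main/disposable_email_blocklist.conf"
            )

            def load_blocklist() -> set[str]:
                with urllib.request.urlopen(BLOCKLIST_URL) as resp:
                    return {line.strip().lower() for line in resp.read().decode().splitlines() if line.strip()}

            def is_disposable(email: str, blocklist: set[str]) -> bool:
                # Naive domain extraction; real code should validate the address first.
                return email.rsplit("@", 1)[-1].lower() in blocklist

            blocklist = load_blocklist()
            print(is_disposable("someone@mailinator.com", blocklist))  # expected True
            print(is_disposable("someone@example.org", blocklist))     # expected False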

        • ajsnigrutin a day ago

          Yep, and if a human is doing that, it's easy to switch over to a different email provider, until that gets banned too, then another, until you can't do anything without a gmail address anymore.

      • reconnecting a day ago

        'tirreno guy' here.

        You can use open-source security analytics (1) to detect fraudulent accounts instead of blocking domain names. Blocking domains only shows your system is fragile and will likely just shift the attackers to use other domains.

        Feel free to contact us if you need assistance with setup.

        (1) https://github.com/tirrenotechnologies/tirreno

        • lucb1e a day ago

          Blocking providers makes sense since they can talk to the human that is doing the abuse. It's their customer after all

          Like with IP ranges that send a lot of spam/abuse, it's the provider's space in the end. If the sender has no identification (e.g. User-Agent string is common for http bots) and the IP space owner doesn't take reasonable steps, the consequence is (imo) not to guess who may be human and who may be a bot, but to block the IP address(es) that the abuse is coming from. I remember our household being blocked once when I, as a teenager, bothered a domain squatter who was trying to sell a normal domain for an extortionary price. Doing a load of lookups on their system, I couldn't have brought it down from an ADSL line but apparently it was an unusual enough traffic spike to get their attention, as was my goal, and I promptly learned from the consequences. We got unblocked some hours after my parent emailed the ISP saying it wouldn't happen again (and it hasn't)

          You don't have to look very far on HN to see the constant misclassifications of people as bots now that all the blocking has gotten seven times more aggressive in an attempt to gatekeep content and, in some cases, protect from poorly written bots that are taking it out on your website for some reason (I haven't had the latter category visit my website yet, but iirc curl/Daniel mentioned a huge outbound traffic volume to one scraper)

          • reconnecting a day ago

            I like the part about leaving the neighborhood blocked from internet access. Did neighbours find out that it was because of you?

            However, email accounts could be stolen, and this makes the email provider a victim as well.

            This particular case sounds very simple, and I'm quite confident that if we dig further, we'll find that all the accounts share some pattern that would be easy to identify and block without hurting legitimate users.
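
            For illustration only (this is not tirreno's detection logic, and the thresholds are made up), a pattern like "many accounts from one email domain created in a short burst" is cheap to flag:

                from collections import defaultdict
                from datetime import datetime, timedelta

                # (email domain, creation time) pairs as you'd pull them from a user table
                signups = [
                    ("inbox.ru", datetime(2025, 6, 24, 14, 1)),
                    ("inbox.ru", datetime(2025, 6, 24, 14, 3)),
                    ("inbox.ru", datetime(2025, 6, 24, 14, 4)),
                    ("example.org", datetime(2025, 6, 24, 9, 0)),
                ]

                WINDOW = timedelta(hours=1)   # made-up threshold
                MAX_PER_WINDOW = 2            # made-up threshold

                def burst_domains(rows):
                    by_domain = defaultdict(list)
                    for domain, ts in rows:
                        by_domain[domain].append(ts)
                    flagged = set()
                    for domain, times in by_domain.items():
                        times.sort()
                        for i, start in enumerate(times):
                            # count signups that fall within WINDOW of this one
                            if sum(1 for t in times[i:] if t - start <= WINDOW) > MAX_PER_WINDOW:
                                flagged.add(domain)
                                break
                    return flagged

                print(burst_domains(signups))  # {'inbox.ru'}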

            • lucb1e a day ago

              Neighbors? No, household; the ISP can't see into the house's network which MAC address did it, so they blocked the subscriber line that served my parents' household back when I lived with them (it was partially blocked actually: you could still visit info pages about the block on the ISP website and contact them by email)

              Edited to add:

              > this makes the email provider a victim as well

              Sure, but they have to deal with a hijacked account anyway, better to tackle it at the source. I'm also not saying to block the whole provider right away, at least not if you can weather the storm for a business day while you await a proper response from their side, just to use blocks when there is nobody steering the ship on their end

          • PokemonNoGo a day ago

            Odd installation steps.

            • theamk a day ago

              Totally normal for PHP software, and that's a primary reason why PHP apps have such a bad security reputation. Note:

              - The application code itself and system configs are modifiable by the web handler itself. This is needed to allow the web-based "setup.php" to work, but it also means that any sort of RCE is immediately "fatal": no need for a kernel/sandbox exploit; if you can get PHP to execute remote code, you can backdoor the existing files as much as you want.

              - The "logs", "tmp", "config", etc. directories are co-located with the code directory. This allows easy install via unzip, but it means the code directory must be kept writable by the web handler while the app is running. That makes it hard to lock down if you want to prevent the kind of backdooring described in the previous point.

              Those install methods have been embraced by the PHP community and make exploits so much easier. That's why you always hear about "php backdoors" and not about "go backdoors" or "django backdoors": with other languages, you version-upgrade (possibly automatically), things work, and the exploits disappear. With PHP, you version-upgrade... by extracting the new zip over the same location. If you were hacked, this basically keeps all the hacks in place.
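
              A rough post-install audit sketch for exactly those two failure modes (the paths and the www-data user are illustrative assumptions, nothing taken from this project's docs):

                  import os, pwd, stat

                  WEBROOT = "/var/www/app"   # illustrative path
                  WEB_USER = "www-data"      # assumption: typical Debian Apache/PHP-FPM user

                  web_uid = pwd.getpwnam(WEB_USER).pw_uid
                  findings = []

                  # A leftover install/ directory means the web-based installer is still reachable.
                  if os.path.isdir(os.path.join(WEBROOT, "install")):
                      findings.append("install/ still present")

                  # Code owned by (or writable by) the web user can be rewritten by any RCE.
                  for root, _dirs, files in os.walk(WEBROOT):
                      for name in files:
                          if name.endswith(".php"):
                              st = os.stat(os.path.join(root, name))
                              if st.st_uid == web_uid or st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
                                  findings.append("web-writable code: " + os.path.join(root, name))

                  print("\n".join(findings) or "nothing obvious")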

              Kinda weird to see this from some self-claimed "security professionals" though, I thought they'd know better :)

              • PokemonNoGo a day ago

                I kinda understood I was missing "something" when I commented, but I haven't used any PHP for over a decade and honestly it looked very... well, you said the rest. Thanks for the clarification. Very unfamiliar with modern PHP.

                • lucb1e a day ago

                  What did you think you had missed? I'm not understanding

                  > but I haven't used any PHP for over a decade

                  This isn't modern PHP, this is the traditional installation method that I was already using a decade ago. The only thing that could be more old-school about it would be a web-cron instead of a proper system cron line. Modern PHP dependency installation is to basically curl|bash something on the host system (Composer, iirc) rather than loading the code in under the web server's user and running the install from there, as this repository suggests. Not that the parent comment is wrong about the risks that still exist in being able to dynamically pull third-party code this way and in hosting secrets under the webroot

                  • reconnecting a day ago

                    Correct, this isn't modern PHP. We aimed to keep overall code dependencies around ~10, and with modern frameworks this number would be multiplied heavily.

                • reconnecting a day ago

                  Fair critique on traditional PHP deployment.

                  However, tirreno shouldn't be public-facing anyway. Production apps forward events via the API over the local network, and security teams access the dashboard over a VPN.

                  Perhaps we will add this recommendation to the documentation to avoid any confusion. Thanks for the clarification.

                • pests a day ago

                  I’d say it’s bog standard for PHP apps and has been for a while. WordPress has a similar install flow. Docker images are provided tho.

                  • reconnecting a day ago

                    Yes, Matomo/Piwik, WordPress, and ProcessWire have more or less the same installation steps, but maybe we missed something along the way.

                  • reconnecting a day ago

                    Can you elaborate, please?

                    • snickerdoodle12 a day ago

                      The instructions aren't all that unusual for PHP software, especially software that targets shared hosting, but they are unusual compared to most other software.

                      > Download a zip file and extract it "where you want it installed on your web server"

                      The requirements mention apache with mod_rewrite enabled, so "your web server" is a bit vague. It wouldn't work with e.g. `python -m http.server 8000`. Also, most software comes bundled with its own web server nowadays but I know this is just how PHP is.

                      > Navigate to http://your-domain.example/install/index.php in a browser to launch the installation process.

                      Huh, so anyone who can access my web server can access the installation script? Why isn't this a command line script, a config file, or at least something bound to localhost?

                      > After the successful installation, delete the install/ directory and its contents.

                      Couldn't this have been automated? Am I subject to security issues if I don't do this? I don't have to manually delete anything when installing any other software.

                      • kstrauser a day ago

                        I'll side with you here. This gives attackers a huge window of time in which to compromise your service and configure it the way they want it configured.

                        In my recent experience, you have about 3 seconds to lock down and secure a new web service: https://honeypot.net/2024/05/16/i-am-not.html

                        • lucb1e a day ago

                          Wut? That can't have been a chance visit from a crawler, unless maybe you linked it within those 3 seconds of creating the subdomain and the crawler visited the page it was linked from in that same second, or you/someone linked to it (in preparation) before it existed and bots were already constantly trying it

                          Where did you "create" this subdomain? Do you mean the vhost in the webserver configuration, or making an A record in the DNS configuration at e.g. your registrar? Because it seems to me that either:

                          - Your computer's DNS queries are being logged and any unknown domains immediately get crawled, be it with malicious or white-hat intent, or

                          - Whatever method you created that subdomain by is being logged (by whoever owns it, or because they e.g. have AXFR accidentally enabled) and it immediately got crawled with whichever intent

                          I can re-do the test on my side if you want to figure out what part of your process is leaky, assuming you can reproduce it in the first place (to within a few standard deviations of those three seconds at least; like if the next time is 40 seconds I'll call it 'same' but if it's 4 days then the 3 seconds were a lottery ticket -- not that I'd bet on those odds to deploy important software, but generally speaking about how aggressive-or-not the web is nowadays)

                          • kstrauser a day ago

                            Consensus from friends after I posted that is that attackers monitor the Let's Encrypt transparency logs and pounce on new entries the moment they're created. Here I was using Caddy, which by default uses LE to create a cert on any hosts you define.

                            I can definitely reproduce this. It shocked me so much that I tried a few times:

                            1. Create a new random hostname in DNS.

                            2. `tail -f` the webserver logs.

                            3. Define an entry for that hostname and reload the server (or do whatever your webserver requires to generate a Let's Encrypt certificate).

                            4. Start your stopwatch.
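
                            For the watching half, a minimal sketch of what those attackers are presumably doing: polling a CT search service for names logged under a domain (crt.sh here; the query parameters and JSON field names are from memory, so treat them as assumptions). The fast actors reportedly subscribe to the log stream itself rather than polling, which is how they reach a new vhost within seconds.

                                import json, urllib.request

                                DOMAIN = "example.com"  # hypothetical apex to watch

                                # crt.sh exposes CT search results as JSON; %25 is a URL-encoded '%',
                                # which crt.sh treats as a wildcard.
                                url = f"https://crt.sh/?q=%25.{DOMAIN}&output=json"
                                with urllib.request.urlopen(url) as resp:
                                    entries = json.load(resp)

                                # Every certificate entry lists the names it covers; newly issued
                                # subdomains show up here shortly after issuance.
                                names = set()
                                for entry in entries:
                                    for name in entry.get("name_value", "").splitlines():
                                        names.add(name.lower())

                                print(sorted(names))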

                            • lucb1e a day ago

                              Thanks! CT logs do explain it. So it's not actually the DNS entry or vhost, but the sharing of the new domain in a well-known place. That's making a lot more sense to me! I can see how that happens unwittingly though

                              We also use CT logs at work to discover subdomains that customers forgot about and may host vulnerable software at (if such broad checks are in the scope that the customer contracted us to check)

                              • kstrauser a day ago

                                Yep, that’s right. And I guarantee, like would bet my retirement savings on it, that someone today has counted on security through obscurity and not realized their new website was compromised a few seconds after they launched it for the first time ever. “I just registered example.com. No one’s ever even heard of it! I’ll just have to clean it up before announcing it”, not realizing they announced it when they turned the server on.

                                3 seconds.

                                • snickerdoodle12 21 hours ago

                                  I had a similar fun experience when I was generating UUID subdomains and was shocked to see traffic in the logs before ever sharing the URL. I've since switched to a wildcard certificate but regardless, you can't really trust the hostname to be secret because of SNI and all that.

                                  • kstrauser 8 hours ago

                                    That would’ve been quite the surprise! I was initially shocked enough when @ and www were getting hammered. A fully random hostname would’ve dazzled me for a bit.

                        • LeifCarrotson a day ago

                          > Huh, so anyone who can access my web server can access the installation script?

                          "Obviously", the server should not be accessible from the public Internet while you're still doing setup. I assume it should still behind a firewall and you're accessing it by VPN. Only after you're happy with all the configuration and have the security locked down tight would you publish it to the world. Right?

                          • undefined a day ago
                            [deleted]
                            • snickerdoodle12 a day ago

                              Obviously you should lock it down. I'm just going off these instructions and how they might be interpreted.

                            • reconnecting a day ago

                              This is not something specific to tirreno, as it's the usual installation process of any PHP application.

                              If there is an example of another approach, I will gladly take it into account.

                              • snickerdoodle12 a day ago

                                > as it's the usual installation process of any PHP application

                                Maybe a decade ago. Look into composer.

                            • kassner a day ago

                              composer install should be pretty much what one needs nowadays. Any installing scripts (although you really shouldn’t) can also be hooked into it.

                              • lucb1e a day ago

                                This requires running the install scripts with your shell permissions rather than with the webserver's permissions, if I'm not mistaken. I could see why one might prefer the other way, even if shared hosting is less common nowadays and shells more often an option

                            • lucb1e a day ago

                              Care to elaborate? They seem bog-standard to me

                            • reconnecting 12 hours ago

                              Sent a letter to security@.

                            • Scene_Cast2 a day ago

                              Oh hey, I was the person who reported this.

                              • mananaysiempre a day ago

                                I have to say that I don't understand the approach. On one hand, addresses @inbox.ru are administered by Mail.Ru, the largest Russian free email host (although I have the impression that its usage is declining), so quite a few (arguably unwise) real people might be using them (I’ve actually got one that I haven’t touched in a decade). On the other, the process for getting an address @inbox.ru is identical to getting one @mail.ru and IIRC a couple of other alternative domains, but only this specific one is getting banned.

                                • takipsizad a day ago

                                  pypi has blocked signups from outlook before. I don't think they care about the impact it creates

                                  • dewey a day ago

                                      I know a bunch of sites that do that, and the problem is usually that registration emails get flagged by Outlook and never arrive, causing a lot of support burden. It's easier to then nudge people in the direction of Gmail or other providers that don't have these issues.

                                  • jrockway a day ago

                                    I've been down that road before. Blocking Outlook and Protonmail filters out 0% of legitimate users and 75% of bots. You do what you can so you're not always 1 step behind.

                                • f311a a day ago

                                    Do you have special access, or can such a thing be tracked from the outside somehow? Could be a fun project to detect this kind of abusive behavior automatically

                                  • miketheman a day ago

                                    Sadly the majority of this data is not externally visible.

                                  • miketheman a day ago

                                    Thank you!

                                  • joecool1029 a day ago

                                      That disposable-email-domains project is a good one. Over 10 years ago I did a dumb thing and pointed some of my domains' MX records to Mailinator before I used them for real email with Fastmail, and now the domains are flagged all over the place as disposable even though they haven't been used that way in ages.

                                    This project has an allowlist you can submit a PR to so it doesn't get sucked back in every time people submit outdated lists of free email provider domains.

                                      I've sent dozens of PRs to de-list my domains on various projects across GitHub and it's like fighting the sea, but the groups making open-source software that uses these lists are at least very apologetic and merge the PRs quickly.

                                      However, the biggest ASSHOLES are Riot Games. I've reached out to them and they will not block new user registrations from my domains. I eventually just had to block all the new account registration emails for League of Legends I was getting in my catch-all. The maintainer of the tool people were using to make new accounts was very responsive and apologetic (quickly merged my PR), but that doesn't stop people who are still using old versions of it.

                                    • klntsky a day ago

                                      Google accounts are $0.50 on hstock. It's impossible to stop spam

                                      • OldfieldFund 5 hours ago

                                          Yup, and Microsoft accounts with access tokens are $0.05

                                      • Nickste a day ago

                                        Mike is doing an incredible job of finding ways to make it harder for attackers to abuse PyPI (see the PyPI quarantine project). At Safety (previously PyUp) we've been tracking a significant increase in malicious packages that compromise you as soon as you install them. We've extended our open-source CLI tool with a "Firewall" capability that aims to protect against some of these kinds of attacks (typosquatting, slopsquatting) while not requiring any changes to the tooling you use (e.g. pip, uv, poetry).

                                        You can check it out with: pip install safety && safety init

                                        • nzeid a day ago

                                          I don't understand how a mere account signup is the bar for publishing packages. Why not queue the first few publishes on new accounts for manual review?

                                          • zahlman a day ago

                                            PyPI's human resources are extremely strained. (The technical side also only exists thanks to Fastly's generosity.)

                                            • undefined a day ago
                                              [deleted]
                                            • akerl_ a day ago

                                              Who would do the manual review?

                                              • vips7L a day ago

                                                A staffer from the Python foundation? This is how maven central works. Someone physically verifies that you own the reverse domain of your package.

                                                • woodruffw a day ago

                                                  Murky security model for domain validation aside, how does that ensure the honesty of the uploaded package?

                                                  (So much of supply chain security is people combining these two things, when we want both as separate properties: I both want to know a package's identity, and I want to know that I should trust it. Knowing that I downloaded a package from `literallysatan.com` without that I should trust `literallysatan.com` isn't good enough!)

                                                  • akerl_ a day ago

                                                    That’s basically no validation at all. Python doesn’t even have that kind of namespacing to need to validate.

                                                    The kind of validation being discussed here would take way more than “a staffer”.

                                                    • nzeid a day ago

                                                      I mean... don't let perfect be the enemy of good?

                                                      I'm insisting that even the barest minimum of human/manual involvement solely on account signup would be a major security improvement.

                                                      It would be exhausting to have to audit your entire dependency tree like your life depended on it just to do the most mundane of things.

                                                      • akerl_ a day ago

                                                        This isn’t about perfect vs good.

                                                        The thing you’re suggesting is outright not possible given the staffing that the Python maintainers have.

                                                • Sohcahtoa82 a day ago

                                                  Because that would easily get DoS'd.

                                                  Any time you introduce humans manually reviewing things, the attackers win instantly by just spamming it with garbage.

                                                  • stavros a day ago

                                                    Probably because that would be too expensive for PyPI.

                                                  • ajross a day ago

                                                    The whole model is broken. The NPM/PyPI idea (vscode extensions got in similar trouble recently) of "we're just a host, anyone who wants to can publish software through us for anyone in the world to use with a single metaphorical click" is just asking for this kind of abuse.

                                                    There has to be a level of community validation for anything automatically installable. The rest of the world needs to have started out by pulling and building/installing it by hand and attesting to its usefulness, before a second level (e.g. Linux distro packagers) decide that it's good software worth supplying and supporting.

                                                    Otherwise, at best the registries end up playing whack-a-mole with trickery like this. At worst we all end up pulling zero days.

                                                    • woodruffw a day ago

                                                      I don't think the model is broken; a latent assumption within the model has always been that you vet your packages before installing them.

                                                      The bigger problem is that people want to have their cake and eat it too: they want someone else to do the vetting for them, and to receive that added value for no additional cost. But that was never offered in the first place; people have just sort of assumed it as open source indices became bigger and more important.

                                                      • andrewaylett 4 hours ago

                                                        There's a whole industry full of people who will charge you for them to do at least a smidge of vetting. And it's not entirely snake oil: finding and publishing vulnerabilities is good advertising.

                                                        I might find the likes of Snyk somewhat annoying when I'm required to have them audit projects at work (they aren't as good as Renovate or even Dependabot at raising version bumps, and most of the alerts are false positives for our environment) but I mostly appreciate that they exist.

                                                        • codedokode a day ago

                                                          That's actually what Linux distributions provide free of charge: a list of verified packages. However, a sustainable solution would be a commercial vendor (like Kaspersky for example) providing a safe feed of packages on a paid basis.

                                                          • woodruffw a day ago

                                                            > That's actually what Linux distributions provide free of charge: a list of verified packages

                                                            That's true in the sense that distros tend to provide digital signatures. But we're talking asserting the actual security of packages, not just that they were quickly looked at by a trusted party.

                                                            And again, that's not somehow blameworthy: they're providing significant value even without asserting the security of packages.

                                                            (And don't take my word for this: take it from the distro maintainers in this very thread, as well as elsewhere[1].)

                                                            [1]: https://www.reddit.com/r/linux4noobs/comments/1c6i3je/are_al...

                                                          • ajross a day ago

                                                            > a latent assumption within the model has always been that you vet your packages before installing them

                                                            That is precisely the broken part. There are thousands of packages in my local python venv. No, I didn't "vet" them, are you serious? And I'm a reasonably expert consumer of the form!

                                                            • woodruffw a day ago

                                                              On re-read, I think we're in agreement -- what you're saying is "broken" is me saying "people assuming things they shouldn't have." But that's arguably not a reasonable assumption on my part either, given how extremely easy we've made it to pull arbitrary code from the Internet.

                                                              • jowea a day ago

                                                                Just have faith in Linus' Law.

                                                            • jowea a day ago

                                                              And who is going to do all this vetting and with what budget?

                                                              • perching_aix a day ago

                                                                Could force package publishers to review some number of other random published packages every so often. (Not a serious pitch.) Wouldn't create any ongoing extra cost (for them) I believe?

                                                                • akerl_ a day ago

                                                                  Do you have a serious pitch?

                                                                  • perching_aix a day ago

                                                                    Not really. The people who have an actual direct stake in this can go make that happen, I'm sure they're much better positioned to do so anyhow. For me, it's a fun thing to ponder, but that's all.

                                                                    • akerl_ a day ago

                                                                      It looks like they are deciding how to approach this. The article you’re commenting on is about how they identified malicious behavior and then blocked that behavior.

                                                                      It seems odd to pitch suggestions for other things they ought to do but then couch it with “well I’m not being serious” in a way that deflects all actual discussion of the logistics of your suggestion.

                                                                      • perching_aix a day ago

                                                                        Yeah, so I've read. Good for them, I suppose.

                                                                        > in a way that deflects all actual discussion of the logistics of your suggestion

                                                                        You seem to be mistaken there: I very much welcome a discussion on it. Keyword being "discussion". Just let's not expect an outcome anything more serious than "wow I sure came up with something pretty silly / vaguely interesting". Or put forward framings like "I'm telling them what to do or what not to do".

                                                                • em-bee a day ago

                                                                  not reviewing submissions is a big problem. i know i can trust linux distributions because package submissions are being reviewed. and especially becoming a submitter is an involved process.

                                                                  if anyone can just sign up then how can i trust that? being maintained by the PSF they should be able to come up with the funding to support a proper process with enough manpower to review submissions. seems rubygems suffers from the same problem, and the issues with npm are also well known.

                                                                  this is one of those examples where initially these services were created with the assumption that submitters can be trusted, and developers/maintainers work without financial support. linux distributions managed to build a reliable review process, so i hope these repositories will eventually be able to as well.

                                                                  • woodruffw a day ago

                                                                    > not reviewing submissions is a big problem. i know i can trust linux distributions because package submissions are being reviewed. and especially becoming a submitter is an involved process.

                                                                    By whom? I've had a decent number of projects of mine included in Linux distributions, and I don't think the majority of my code was actually reviewed for malware. There's a trust relationship there too, it's just less legible than PyPI's very explicit one.

                                                                    (And I'm not assigning blame for that: distros have similar overhead problems as open source package indices do. I think they're just less visible, and people assume lower visibility means better security for some reason.)

                                                                    • em-bee a day ago

                                                                      which distributions? and did you submit the packages yourself or did someone else from the distribution do the work?

                                                                      yes, there is a trust relationship, but from what i have seen about the submission process in debian, you can't just sign up and start uploading packages. a submitter receives mentoring and their initial packages are reviewed until it can be established that the person learned how to do things and can be trusted to handle packages on their own. they get GPG keys to sign the packages, and those keys are signed by other debian members. possibly even an in person meeting is required if the person is not already known to their mentors somehow. every new package is vetted too, and only updates are trusted to the submitter on their own once they completed the mentoring process. fedora and ubuntu should be similar. i don't know about others. in the distribution where i contributed (foresight) we only packaged applications that were known and packaged in other distributions. sure, if an app developer went rogue, we might not have noticed, and maybe debian could suffer from the same fate but that process is still much more involved than just letting anyone register an account and upload their own packages without any oversight at all.

                                                                      • woodruffw a day ago

                                                                        > did someone else from the distribution do the work?

                                                                        Someone else.

                                                                        To be clear: I find the Debian maintainers trustworthy. But I don't think they're equipped to adequately review the existing volume of packages to the degree that I would believe an assertion of security/non-maliciousness, much less the volume that would come with re-packaging all of PyPI.

                                                                        (I think the xz incident demonstrated this tidily: the backdoor wasn't caught by distro code review, but by a performance regression.)

                                                                        • jowea a day ago

                                                                          I've contributed some packages to NixOS, I didn't do code review and as far as I can tell nothing told me I had to. I assume that if I had said the code was hosted at mispeledwebsite.co.cc/foo in the derivation instead of github.com/foo/foo or done something obviously malicious like that the maintainers would have sanity checked and stopped me, but I don't think anyone does code review for a random apparently useful package. And if github.com/foo/foo is malicious code then it's going to go right through.

                                                                          And isn't the Debian mentoring and reviewing merely about checking if the package is properly packaged into the Debian format and properly installs and includes dependencies etc?

                                                                          I don't think there is anything actually stopping some apparently safe code from ending up in Linux distros, except the vague sense of "given enough eyeballs, all bugs are shallow", i.e. that with everyone using the same package, someone is going to notice something, somehow.

                                                                          • codedokode a day ago

                                                                            Maybe Linux distributions should do it the other way: they do not need to provide every software package via trusted repositories. Instead, they should provide a small set of trusted packages to create a standard execution environment, and run everything else in a sandbox. This way one can install any third-party software safely and maintainers have less work to do. And software developers do not need to create a package for every one of hundreds of distributions.

                                                                            • em-bee 10 hours ago

                                                                              that is actually happening. fedora has copr where anyone can create a personal repository to upload their packages. suse has something similar which, surprisingly, is able to support multiple major distributions, including debian. those are effectively repo hosting services that anyone could provide.

                                                                              the difference to pypi/npm/rubygems etc is that nobody would upload a package to a personal repo with a dozen dependencies from other personal repos. when i install a copr package i can be sure that all dependencies are either from the trusted fedora repo or from that same personal repo.

                                                                              that means i only need to trust that one developer alongside the official distribution. unlike npm or pypi where i have to trust that each submitter vetted their own dependencies, or vet them myself, which is also unrealistic.

                                                                              • codedokode 6 hours ago

                                                                                No, they are not. Neither Fedora nor Debian have any sandboxing and if you add a third-party repository, it gets root access to your system and can run any scripts when installing or updating software.

                                                                                Also what I meant is a "standard execution environment", so that the developer doesn't need to make a separate version for each Linux distribution, and doesn't have to make repositories.

                                                                                • em-bee an hour ago

                                                                                  sorry, i misread that. i thought you were just talking about the trust and vetting issue. i glanced over "sandboxing". sandboxing apps is what android is doing, and i think nixos and also flatpak, etc. and with flatpak, that approach is effectively already possible, and in a way, already in the works. but it's a different approach, one that i don't like at all, because it is way too heavy handed and makes interoperability between apps very difficult. it also doesn't solve the trust problem, because at the end of the day the sandboxed app still needs access to my data, so i still need to trust it.

                                                                                  however that is completely beside the point, because we are really talking about improving trust with pypi and npm and the like. sandboxing here is simply not possible because these are mostly libraries to be used for development of larger apps.

                                                                                  the approach distributions are using now would be useful here.

                                                                          • ajross a day ago

                                                                            > I don't think the majority of my code was actually reviewed for malware.

                                                                            That's not the model though. Your packages weren't included ab initio, were they? They were included once a Debian packager or whoever decided they were worth including. And how did that happen? Because people were asking for it, already having consumed and contributed and "reviewed" it. Or if they didn't, an upstream dependency of theirs did.

                                                                            The point is that the process of a bunch of experts pulling stuff directly from github in source form and arguing over implementation details and dependency alternatives constitutes review. And quite frankly really good review relative to what you'd get if you asked a "security expert" to look at the code in isolation.

                                                                            It's not that it's impossible to pull one over on the global community of python developers in toto. But it's really fucking hard.

                                                                            • woodruffw a day ago

                                                                              > The point is that the process of a bunch of experts pulling stuff directly from github in source form and arguing over implementation details and dependency alternatives constitutes review. And quite frankly really good review relative to what you'd get if you asked a "security expert" to look at the code in isolation.

                                                                              The thing is, I don't think that's what's happening in 2025. I think that might have been what was happening 20 years ago, but I didn't experience any pushback over my (very large) dependency tree when my projects were integrated. Lots of distros took a look at it, walked the tree, rolled everything up, and called it a day. Nobody argued about dependency selection, staleness, relative importance, etc. Nobody has time for that.

                                                                              > It's not that it's impossible to pull one over on the global community of python developers in toto. But it's really fucking hard.

                                                                              I don't think this is true; at the periphery, ~nobody is looking at core dependencies. We can use frequency of "obvious" vulnerabilities in core packages as a proxy for how likely someone would discover an intentional deception: CVE-2024-47081 was in requests for at least a decade before anybody noticed it. Last time I checked, the introduction-to-discovery window for UAF vulnerabilities in Linux itself was still several years.

                                                                              (This is true even in the simplest non-code sense: I maintain a lot of things and have taken over a lot of things, and nobody notices as long as the releases keep coming! This is what the Jia Tan persona recognized.)

                                                                              • ajross 20 hours ago

                                                                                That's sort of a double standard, though. No, Debian et. al. aren't perfect and there are ways that serious bugs can and do make it through to production systems. But very, very few of them are malicious exploits. The xz-utils mess last year was a very notable, deliberate attack that took years of planning to get an almost undetectable exploit into a core Linux library.

                                                                                And. It. Failed. Debian caught it.

                                                                                So no. Not perfect. But pretty good, and I trust them and their track record. That's a very different environment than "Here guys, we'll send your code all over the world. But no Russian emails please. Thx."

                                                                        • ajross a day ago

                                                                          Right now: we are. And we're collectively paying too much for a crap product as it stands.

                                                                          Debian figured this out three decades ago. Maybe look to them for inspiration.

                                                                          • notatallshaw a day ago

                                                                            If you want to offer a PyPI competitor whose value is that all packages are vetted or reviewed, nothing stops you; the API that Python package installer tools use to interact with PyPI is specified: https://packaging.python.org/en/latest/specifications/simple...
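
                                                                            As a concrete illustration, the Simple API is just HTTP; here's a minimal sketch of listing a project's files via the JSON serialization from PEP 691 (no error handling, and a vetting proxy would filter this list before handing it to pip/uv):

                                                                                import json, urllib.request

                                                                                # Ask the Simple index for one project, requesting the PEP 691 JSON format.
                                                                                req = urllib.request.Request(
                                                                                    "https://pypi.org/simple/requests/",
                                                                                    headers={"Accept": "application/vnd.pypi.simple.v1+json"},
                                                                                )
                                                                                with urllib.request.urlopen(req) as resp:
                                                                                    data = json.load(resp)

                                                                                # A curated mirror could drop unvetted files here before serving the index.
                                                                                for f in data["files"][:5]:
                                                                                    print(f["filename"])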

                                                                            There are a handful of commercial competitors in this space, but in my experience this ends up only being valuable for a small % of companies. Either a company is small enough that it wants to be agile and doesn't have time for a third party to vet or review the packages it wants to use, or a company is big enough that it builds its own internal solution. And single users tend to get annoyed when something doesn't work and stop using it.

                                                                            • em-bee a day ago

                                                                              that's like suggesting someone complaining about security issues should fork libxml or openssl because the original developers don't have enough resources to maintain their work. the right answer is that as users of those packages we need to pool our resources and contribute to enable the developers to do a better job.

                                                                              for pypi that means raising funds that we can contribute to.

                                                                              so instead of arguing that the PSF doesn't have the resources, they should go and raise them. do some analysis on what it takes, and then start a call for help/contributions. to get started, all it takes is to recognize the problem and put fixing it on the agenda.

                                                                              • woodruffw a day ago

                                                                                > so instead of arguing that the PSF doesn't have the resources, they should go and raise them

                                                                                The PSF has raised resources for support; the person who wrote this post is working full-time to make PyPI better. But you can't staff your way out of this problem; PyPI would need ~dozens of full time reviewers to come anywhere close to a human-vetted view of the index. I don't think that's realistic.

                                                                              • ajross a day ago

                                                                                Right. That's the economic argument: hosting anonymously-submitted/unvetted/insecure/exploit-prone junkware is cheap. And so if you have a platform you're trying to push (like Python or Node[1]) you're strongly incentivized to root your users simply because if you don't your competitors will.

                                                                                But it's still broken.

                                                                                [1] Frankly even Rust has this disease with the way cargo is managed, though that remains far enough upstream of the danger zone to not be as much of a target. But the reckoning is coming there at some point.

                                                                              • zahlman a day ago

                                                                                > And we're collectively paying too much for a crap product as it stands.

                                                                                Last I checked, we pay $0 beyond our normal cost for bandwidth, and their end of the bandwidth is also subsidized.

                                                                            • extraduder_ire a day ago

                                                                              Has anyone tried calculating pagerank numbers for such packages, since so many of them depend on other packages, and most of these repositories report install counts?

                                                                              This is easy to game, and in some ways has been pre-gamed. So it wouldn't really be a measure of validation, but would be interesting to see.
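
                                                                                A toy version of the idea, for the curious: build the dependency graph and run PageRank over it (networkx here; the edges below are made up, and a real version would weight by install counts):

                                                                                    import networkx as nx

                                                                                    # Hypothetical "package depends on dependency" edges; real data would come from index metadata.
                                                                                    deps = [
                                                                                        ("myapp", "requests"),
                                                                                        ("requests", "urllib3"),
                                                                                        ("requests", "certifi"),
                                                                                        ("flask", "werkzeug"),
                                                                                        ("flask", "click"),
                                                                                    ]

                                                                                    # Point edges at the dependency so heavily-depended-on packages accumulate rank.
                                                                                    g = nx.DiGraph(deps)
                                                                                    scores = nx.pagerank(g, alpha=0.85)

                                                                                    for pkg, score in sorted(scores.items(), key=lambda kv: -kv[1]):
                                                                                        print(f"{pkg:10s} {score:.3f}")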

                                                                              • undefined a day ago
                                                                                [deleted]
                                                                              • undefined a day ago
                                                                                [deleted]
                                                                                • ynbl_ a day ago

                                                                                  and mail.ru is not even a real internet service:

                                                                                  > Please enter the phone number you'll use to sign in to Mail instead of a password. This is more secure.

                                                                                  • codedokode a day ago

                                                                                      Mail.ru is more than real; there is just a trend of moving away from passwords because they are not secure when used by ordinary people and can be stolen with a keylogger. So Russian services are moving to SMS codes, mobile apps, or government services for authorization. There is also a legal requirement to implement a system that doesn't allow anonymous registration not linked to a real identity.

                                                                                  • Tiberium a day ago

                                                                                    I'm really not following -- why does the ban specifically focus on a single domain instead of attempting to solve the core issue? Do the maintainers not know that accounts for any big email provider (gmail, outlook, you name it) can be bought or created for very, very cheap. Which is obviously what the attackers will now do after this ban.

                                                                                    The blog post references [0] which makes it seem like the maintainers do, in fact, just ban any email providers that attackers use instead of trying to solve the issue.

                                                                                    [0] https://blog.pypi.org/posts/2024-06-16-prohibiting-msn-email...

                                                                                    • snickerdoodle12 a day ago

                                                                                      What is the core issue and how would you solve it?

                                                                                      • undefined a day ago
                                                                                        [deleted]
                                                                                    • lysace a day ago

                                                                                      I don't understand why this is newsworthy. Spam never ends.

                                                                                      • perching_aix a day ago

                                                                                        Because of:

                                                                                        > See a previous post for a previous case of prohibiting a popular email domain provider.

                                                                                        • lysace a day ago

                                                                                          That was outlook.com/hotmail.com. So? Incompetent/malicious/disengaged mail providers come in all shapes and forms.

                                                                                          • perching_aix a day ago

                                                                                            The implication is that this other email host also being one of the popular ones means there'll be a more widespread user impact than when they block smaller providers. So just like with Outlook, they put out this statement on why they're doing this.

                                                                                            • lysace a day ago

                                                                                              Ah, I see your point.

                                                                                              Although: I don't think the kind of developers that use low quality email providers like that follow HN.

                                                                                              Edit: Remember those 7+ hours back in 1999 when all Microsoft Hotmail accounts were wide open for perusal?

                                                                                              https://time.com/archive/6922796/how-bad-was-the-hotmail-dis...

                                                                                              > Yesterday a Swedish newspaper called Expressen published the programmer’s work, a simple utility designed to save time by allowing Hotmail users to circumvent that pesky password verification process when logging into their accounts. The result? As many as 50 million Hotmail accounts were made fully accessible to the public. Now that the damage has been done, what have we learned?

                                                                                              > It wasn’t until the lines of code appeared in Expressen that people realized how vulnerable Hotmail really was. The utility allowed anybody who wanted to to create a Web page that would allow them log into any Hotmail account. Once the word was out, dozens of pages such as this one were created to take advantage of the security hole. Unfortunate programmers at Microsoft, which owns Hotmail, were rousted out of bed at 2 AM Pacific time to address the problem. By 9 AM Hotmail was offline.

                                                                                              https://www.theregister.com/1999/08/30/massive_security_brea...

                                                                                              https://www.theguardian.com/world/1999/aug/31/neilmcintosh.r...

                                                                                              https://www.salon.com/1999/09/02/hotmail_hack/

                                                                                        • reconnecting a day ago

                                                                                          Online fraud will never end, but it is possible to make it much more expensive and shift attackers to other victims.

                                                                                        • sysrestartusr a day ago

                                                                                          [flagged]