*65536 ports
Port 0 is a port that some operating systems can, and do, host services on, reachable over the Internet.
Also - if any MariaDB devs are reading this - your default setting that makes the database listen on port 0 to disable Internet access does not, in fact, disable Internet access to the DB on quite a few thousand systems.
MariaDB explicitly checks if the port is non-zero before listening on a TCP socket:
https://github.com/MariaDB/server/blob/ae998c22b2ce4f1023a6c...
> if (mysqld_port)
> activate_tcp_port(mysqld_port, &listen_sockets, false);
if (mysqld_port) means "if mysqld_port is different from 0"
This check appears to go back at least to MariaDB 5.5 (2012).
You can even use it under Linux if you wish, btw: you just can't bind to it directly, but you can have your firewall redirect port 0 to something else.
You can bind to it on some versions of Linux. I've scanned a bunch of Linux systems that host stuff on port 0.
Your observation doesn't contradict the use of firewall rules to accomplish this.
It's not some ufw rule that normally prevents hosting a service on port 0.
That's not what was said. They said that a firewall rule can redirect traffic coming in on port 0 to a running service even when a service cannot bind directly to port 0.
Binding with port 0 as argument for AF_INET binds a random available port, not port 0. This is documented behavior of Linux and likely every other OS implementing a BSD-style socket interface.
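This is easy to confirm for yourself. A minimal Python sketch of the documented ephemeral-port behavior:

```python
import socket

# Ask the OS to bind "port 0": per BSD socket semantics this requests
# an ephemeral port chosen by the kernel, not literal port 0.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
host, port = s.getsockname()
print(port)  # some kernel-chosen ephemeral port, never 0
s.close()
```

Running this repeatedly prints different high-numbered ports, which is exactly the "random available port" behavior described above.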
Also note that ufw is just a tiny, non-standard wrapper for the much more powerful nftables/iptables interfaces
It feels inevitable that computer security will continue evolving towards "active defense" typified by approaches like the above. Look at how complex and many-layered your immune system is, and consider that eventually your computer and/or network will resemble that as well.
IMO this is still a passive type of security through obfuscation. Active defence would be more like returning zip bombs to known intruders in order to crash the process.
Or a tar pit: https://github.com/skeeto/endlessh
Endlessh seems to be abandonware. linuxserver.io used to maintain a docker image but deprecated it (https://github.com/linuxserver/docker-endlessh/pull/16) after endlessh didn’t get any new updates in over 3 years. I’ve started using endlessh-go instead https://github.com/shizunge/endlessh-go
It appears it can be configured to actively return attacks:
> Portspoof can be used as an 'Exploitation Framework Frontend', that turns your system into responsive and aggressive machine. In practice this usually means exploiting your attackers' tools and exploits
I can't figure out how this would work or what this means. Most of the links to the documentation seem to be missing.
I'd actually be curious to know if this seemingly ~10 year old software still works. Also how much bandwidth it uses, CPU/RAM etc.
There's tons of client software that can be exploited if you send a dangerous payload to it. Think of an exploitable version of curl that fails when it receives a malicious HTTP header.
I would guess that it fingerprints the scanning software (e.g. metasploit), then feeds a payload back to it that has a known exploit in the scanning script.
IT is growing up gradually. It's only had a few decades to worry about security and I've seen most of them.
One day, IT will become time served but not today.
I'm not sure I like this analogy, since the immune system regularly malfunctions and damages the host (allergies, cancer, etc.), but then again, it does draw some concerning parallels.
The immune system is an incredible marvel of engineering, protecting you against an infinite number of attack vectors without any online database update after initial installation. It develops countermeasures on the fly, deploys layers of defense that coordinate intelligently as a swarm, and keeps track of which molecules belong to "you" while those molecules keep replacing themselves with ones obtained from the outside. It constantly ingests signals from billions of sensors all over your body, which function as first-responder defense measures as well as repair kits AND evidence capsules for the cavalry that rolls in later. And that's just a sliver of all the ingenious ways the immune system works.
I wholeheartedly recommend reading "Immune" by Philipp Dettmer: https://www.amazon.de/dp/0593241312
I own a copy. It's a great book.
I also suffer from severe asthma and allergies, both of which are, by all accounts, not normal or desirable responses of the immune system; and those are at the low end of the horror spectrum of immune malfunctions that are terrifyingly harmful to the host.
It is an exceptionally complex and wondrous thing, but where we diverge is in thinking of it as a "marvel of engineering" or any other phrasing that implies some sort of guiding hand. It is a far from perfect system, and it gets things wrong often enough that we have a global industry creating products to control it.
> […] thinking of it as a "marvel of engineering" or any other prose that implies some sort of guiding hand.
Heh. It's hard to talk about the way things have been shaped by evolution without implying an actor, because our vocabulary is so very shaped by our subjective experience. I, personally, am reasonably certain that there is no creator in whatever sense. Yet, I'm still awestruck at the ingenuity life on our planet has shown, and the immune system is a never-ending source of wonder to me.
And while it surely isn't perfect, if we were to look at the raw number of incidents versus the number of adversary actions against your body, it would be pretty darn near perfect.
then vaccination is online database update through forced learning
AI should enable high-quality and deep honeypots. It's a perfect fit for current LLM capabilities... it just has to look good enough.
Our skin doesn't pretend to be a mouth I don't think.
But within nature there are examples of this kind of mimicry, e.g. butterfly wings pretending to be a predator's eyes.
Having all ports open is not a butterfly pretending to be a predator. It's a butterfly pretending to be everything, including other prey that would attract other predators.
I once made a similar attempt at stopping email-crawler spambots by creating a web page that produces infinite random email addresses.
http://web.archive.org/web/20020610054821/http://www.sourtim...
Am I missing something or is it truly infinite?
The "Next page" link goes to the same page with an entirely different set of addresses. So it's practically infinite for a crawler.
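The original page is long gone from anything but the archive, but the idea is simple enough to sketch. A hypothetical Python version (not the original site's code; the domain names and function are made up for illustration):

```python
import random
import string

def fake_emails(n, seed=None):
    """Generate n plausible-looking random email addresses."""
    rng = random.Random(seed)
    domains = ["example.com", "mail.test", "corp.invalid"]
    out = []
    for _ in range(n):
        user = "".join(rng.choices(string.ascii_lowercase, k=rng.randint(5, 10)))
        out.append(f"{user}@{rng.choice(domains)}")
    return out

# Each "page" is just a fresh batch; a crawler following "next page"
# links never runs out of addresses to harvest.
print("\n".join(fake_emails(5)))
```

Serving a new batch per request costs the site almost nothing, while the crawler's harvested list fills up with garbage.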
Be aware that if you run something like this, you will get dozens of bug bounty requests by people who scanned your machine and found "known vulnerable version of X" running.
Wouldn't that inevitably end up with your server being more closely inspected (or at least more heavily trafficked) by hackers/bots?
I doubt that most script kiddies are filtering out potential honeypots/things like this from their tools.
I guess it'll be obvious that a server is running portspoof after you find that 3 random services that nobody uses anymore seem to be running, but now that you know the host is up, which ports do you tinker with?
If you assume that scanning/attacking each port on each server takes about the same effort, you are better off finding a machine where the scan/attack has a higher chance of being successful, even if you can tell which ports are spoofed and not worth attacking.
Maybe you can run portspoof locally on 127.0.0.35 and compare which responses seem different (data, timings) from what you get back, but the space is suddenly 5000x bigger than the handful of ports that normally seem to be open and ports on other servers may seem more likely to yield success.
Only answer positively on the first ones. Use nmap's OS/service ID database to emulate the correct response per port.
I agree; returning legit banners on common ports is likely to get you looked at more rather than less, since most tools don't account for situations where every single port is open, indicating false positives. This is a common scenario on penetration tests, and while it does end up wasting time, I'd rather not give attackers any more reason to look at my infra. I would prefer port knocking, which is kind of the polar opposite approach to this.
Combine the two.
By default, return nonsense on all ports. But once a certain access sequence has been detected from a source IP, redirect traffic to a specific port from just that IP to your real service.
So port knocking, but with also returning junk during the knocking process?
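Roughly, yes. A minimal sketch of that combination (hypothetical Python; the knock sequence, class, and return values are all invented for illustration, and a real deployment would do this in the firewall rather than in userspace):

```python
KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret sequence

class KnockTracker:
    """Track per-IP progress through a secret port-knock sequence.

    Until the sequence completes, every hit gets a junk response
    (the portspoof-style behavior); once it completes, the source IP
    is allowed through to the real service (recorded in `allowed`).
    """
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = {}   # ip -> index of next expected knock port
        self.allowed = set()

    def hit(self, ip, port):
        if ip in self.allowed:
            return "real-service"
        expected = self.sequence[self.progress.get(ip, 0)]
        if port == expected:
            idx = self.progress.get(ip, 0) + 1
            if idx == len(self.sequence):
                self.allowed.add(ip)       # knock complete
                self.progress.pop(ip, None)
            else:
                self.progress[ip] = idx
        else:
            self.progress[ip] = 0          # wrong port resets progress
        return "junk-banner"               # junk during the knock, too
```

This is deliberately simplified (no timeouts, no handling of a wrong knock that happens to match the first port); the point is only that the knocker sees junk like everyone else until the full sequence lands.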
Yea, thinking about it for a minute, I'd expect there are limited threat models this tool helps with. For broad attacks, it would only be somewhat effective if deployed on tens of millions of hosts, so that attacking becomes impractical because the adversary is mostly finding and interacting with the honeypots.
If you are specifically getting targeted, there might be a slight delay by having the adversary try and exploit the honeypot ports, but if you're running a vulnerable service you still get exploited.
Also, if you're a vendor, when prospective customers' security teams scan you, you'll have some very annoying security questionnaires to answer.
Not a network security expert, but the level of traffic necessary to figure out what's real would probably trip other detection mechanisms in the process.
If you're worried about mass internet scans, I can see the downsides. But if you're worried about a targeted attacker scanning just your organization’s IP ranges, this seems like it would hinder them quite a bit.
You also now have to worry about vulnerabilities in portspoof.
Put it in an otherwise airgapped dmz.
Or not respond at all. On Linux you can disable the RST behavior using
sysctl -w net.ipv4.tcp_reset_reject=0
> it binds to just ONE tcp port per a running instance !
How does that work? Do you need to run 65535 instances to cover all ports?
iptables rule redirects all closed ports on the machine to the one portspoof listens to: https://github.com/drk1wi/portspoof/blob/c3f3c34531c59df229e...
Then it calls getsockopt to find out what the original port was: https://github.com/drk1wi/portspoof/blob/c3f3c34531c59df229e...
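For the curious: SO_ORIGINAL_DST returns a raw `sockaddr_in`, so recovering the pre-NAT destination is a single getsockopt call plus a little struct parsing. A hedged Python sketch (the helper names are mine, not portspoof's; the constant value 80 comes from `<linux/netfilter_ipv4.h>`, and the live lookup only works on a connection that actually arrived via a REDIRECT rule, so only the parsing half is runnable anywhere):

```python
import socket
import struct

SO_ORIGINAL_DST = 80  # from <linux/netfilter_ipv4.h>

def original_dst(conn):
    """Return (ip, port) the client originally targeted, given a
    connection that arrived via an iptables REDIRECT rule (Linux only)."""
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)

def parse_sockaddr_in(raw):
    # struct sockaddr_in layout: sin_family (2 bytes), sin_port
    # (2 bytes, network byte order), sin_addr (4 bytes), padding.
    port = struct.unpack("!H", raw[2:4])[0]
    ip = socket.inet_ntoa(raw[4:8])
    return ip, port
```

So the daemon binds one port, the PREROUTING rule funnels everything to it, and each accepted connection can still look up which of the 65k "open" ports the scanner thought it was talking to.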
That's actually pretty darn neat. Thanks for the references, too.
Why? A single process can bind to multiple ports. I don't know what the hard limit actually is, or how much memory it would use. Probably a port redirect would just be simpler.
But this thing says it binds to ONE port. iptables is doing the port redirect for unbound ports.
nat-redirection
Yea, that statement confused me as well.
Nice, I'm glad the word "honeypot" is never used. I once inherited a "true" honeypot, and when I went to check it, it had like 30 ports open; my reaction was literally "what the fuck is this crap", said out loud.
Isn't that precisely what a honeypot is meant to do though? Having ports open so that script kiddies get excited they get access to something, but the something just isn't anything? Having a honeypot that is locked down doesn't really seem like a honeypot at that point
I do red team, if I see a server with 20+ ports I'll immediately assume it's a honeypot and will stop scanning it. If you are part of a blue team you WANT them to waste time, not instantly know it's a honeypot, that's what I meant for specifically this software.
So what you're saying is that I should open 30 ports on my critical servers so that they are ignored as honeypots by the attackers? Noted.
I mean sure, absolutely go for it, don't forget to tell me the IP so I can add it to my ignore list.
If you’re a red team member that needs to be told this type of information, you’re not very good at it. You just said this is something you do on your own, so why would you need to be told about it? Either you do it or you don’t.
youdidntgetthejoke.jpg
Sure, here you are! 127.0.0.1 Thank you for your thoughtfulness.
I think what's neat is that this tool can reply to so many protocols/ports, you can enable whichever subset you want.
You could also easily tweak it to have the ports spread on a few different IP addresses instead of a single one. That would make them much less obvious.
I remember the first box I ever built that went online before I had any clue about what I was doing. It had to have looked like a honeypot. Then someone knowledgeable helped me and started shutting things down. They also mentioned Debian being designed as essentially opt-in instead of opt-out to precisely avoid this issue
Exactly. Most people miss that a server being a honeypot doesn't mean "HEY SCAN ME HEY I'M A HONEYPOT SCAN ME SO I CAN GET FREE INFO"; you need a balance between exposing yourself and being authentic, so the person on the other side doesn't hit Ctrl-C at the sight of a lot of open ports. I mean sure, you'll get a ton of info from bots, but if you're truly using a honeypot for R&D you want the true hackers to knock on your door.
The usual trick is to have many pots, each of which is mostly but not entirely locked down.
Perhaps one of us misunderstands the term honeypot, it could be me, but IMO this seems perfectly usable to create a honeypot system on your network.
A honeypot is used to attract and detect an attacker, usually logging their actions and patterns for analysis or blocking. This tool could use more logging beyond just iptables, and sure it’s not _by itself_ a honeypot, but the idea isn’t that far off.
All that aside, the GitHub page suggests this “enhances OS security” which I don’t buy one bit. Sure it provides some obfuscation against automated service scanners, but if you have a MySQL server listening on 3306, and an attacker connects to 3306, they’re still talking to MySQL. Doesn’t matter if all the other 65534 ports are serving garbage responses.
All the responses look legitimate though, so even if someone does hit that MySQL, they'll be hard pressed determining it's not part of the noise of the other 65.5k legitimate-seeming responses. They'll just be wasting resources trying to get beyond such a broad surface to gain any depth. And if they already know to target MySQL (or any other particular service), it's all moot in any case, but also they wouldn't be doing a spectrum scan.
But how do you know it's real? You might be running Postgres on 5432 and them connecting to 3306 might respond with a lookalike mysql.
I would imagine the amount of time someone spends “investigating” a port like 3306 is the amount of time it takes for the existing automated software to run a check to see if the mysql server is vulnerable. So unless the service on 3306 is able to spoof a vulnerable mysql server, they don’t care if it’s real or not. They just care if their tool reports a vulnerable service.
Would this also be potentially a DoS amplifier? If you sent it the right spoof packets, would it return a lot of packets to the apparent origin?
For TCP services, it won't send a large packet until the "client" provides a correct ACK packet to complete the three way handshake.
This would indeed be pants-on-head for UDP.
Amplification attacks are mainly a concern with UDP because UDP does not have a return routability check, while TCP does.
This sent me down a rabbit hole remembering the DDoS attacks the skids were coming up with in the 90s. The famous Pepsi & Smurf attacks would spoof a connection from one server running CHARGEN [1] to another running ECHO [2], and it would just send an endless flood of characters to the victim. It might have been one of, if not the, first distributed denial of service attacks. It's wild to think people would leave all those ports open on their servers, just spewing endless characters. Those were the days when everyone was so open and trusting of other users on the internet.
CORRECTION: This was actually named "Fraggle". [3] Smurf involved ICMP flooding.
I remember seeing these on EFnet IRC in the 90s. Since the code is so ancient, I thought I'd share it. I'm sure these would be useless in modern times, but they're an interesting bit of internet history. It's also hilarious to look at the comments and see old IRC handles you recognize. Who remembers Napster before he developed the P2P software that made him famous?
Pepsi.c https://cdn.preterhuman.net/texts/underground/hacking/exploi...
This site has loads of old historic exploits preserved one folder up.
Smurf.c https://gist.github.com/JasonPellerin/2eecbf1f7e49750d2249
[1] https://en.wikipedia.org/wiki/Character_Generator_Protocol?w...
I remember. I was hanging out in #ansi and #hav0k on EFNet at the time with nyt, soldier, v9, Napster, etc.
Fun times. I miss those days.
I do something similar on my website: https://bini.wales returns 200 for all endpoints and logs all attempts, so it makes for a decent honeypot against automated attacks (mostly it just catches people mass scanning for vulnerable WordPress plugins or leftover backdoors). Similarly, https://varun.ch/login emulates a WordPress site (with a twist)
You will get the WordPress scans regardless of what you return.
The natural evolution of such an approach is to also advertise a variety of seeming security holes, and silently maintain a blacklist that feeds actual production systems as a firewall, should said hacker reach that point.
Or just don't
> it takes more than 8hours and 200MB of sent data in order to properly go through the reconessaince phase for your system ( nmap -sV -p - equivalent).
So, every automated portscan from a hacked machine will waste 200MB of my bandwidth?
Well, that is certainly one way to attack the problem!
To speed up a comprehensive port probe with service discovery, one could use a few different systems on different IPs and divide the work.
Bringing back fond memories of the happy 90s
In the mid-90s, there was a honeypot product called CyberCop Sting[1], which predated Secure Networks' Ballista[2]. CyberCop Sting could simulate TCP and UDP services across various implementations. If I recall correctly, it also allowed the configuration of TCP/IP stacks to mimic the behavior of different operating systems. These features were particularly innovative almost 30 years ago.
[1] https://theswissbay.ch/pdf/Gentoomen%20Library/Security/0321...
Fascinating. I was going to ask if there were similar projects. It seems like an obvious thing to do and I was mildly surprised that it never occurred to me and majorly surprised that this was the first time I'm hearing about the idea.
How is this better than configuring an iptables redirection?
iptables only provides one of the two approaches detailed, namely the "ack" portion. For the "fake a random real service on each port" portion you'd need something additional like this.
Cute as this is, most attackers aren't obsessed with you, but are looking at just one port, the one they have an exploit for.
And if you have that port open with a vulnerable service, they'll find and exploit it, irrespective of whether this tool is running.
If all attackers worked by having one ready exploit and scanning only the port that exploit targets, why do defenders see broad port scans at all?
How does this compare to a tarpit?
Tarpit (networking) https://en.wikipedia.org/wiki/Tarpit_(networking)
/? inurl:awesome tarpit https://www.google.com/search?q=inurl%3Aawesome+tarpit+site%...
"Does "TARPIT" have any known vulnerabilities or downsides?" https://serverfault.com/questions/611063/does-tarpit-have-an...
https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62f... :
> However, if implemented incorrectly, TARPIT can also lead to resource exhaustion in your own server, specifically with the conntrack module. That's because conntrack is used by the kernel to keep track of network connections, and excessive use of conntrack entries can lead to system performance issues, [...]
> The script below uses packet marks to flag packets candidate for TARPITing. Together with the NOTRACK chain, this avoids the conntrack issue while keeping the TARPIT mechanism working.
The tarpit module used to be in tree.
xtables-addons/ xt_TARPIT.c: https://github.com/tinti/xtables-addons/blob/master/extensio...
Haven't looked into this too deeply but there is a difference between delaying a response (requests get stuck in the tarpit) vs providing a useless but valid response. This approach always provides a response, so it uses more resources than ignoring the request, but less resources than keeping the connection open. Once the response is sent the connection can be closed, which isn't quite how a tarpit behaves. The Linux kernel only needs to track open requests in memory so if connections are closed, they can be removed from the kernel and thus use no more resources than a standard service listening on a port.
There is a small risk in that the service replies to requests on the port, though: as replies get more complicated in order to mimic services, you run the risk of an attacker exploiting the system making the replies. Another way of putting it: this attempts to run a server that responds to incoming requests on every port, in a way that mimics what might run on each port. It technically opens up an attack surface on every port, because an attacker can feed it requests; the trade-off is that it runs in user mode and could be granted nil permissions, or put on a honeypot machine that is disconnected from anything useful and heavily tripwired for unusual activity. Hardcoding a response per port to make it appear open is itself a very simple activity, so the attack surface introduced is minimal while the utility of port scanning is greatly reduced. The more realistically you behave in response to inputs, though, the greater the attack surface to exploit.
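To make the respond-and-close behavior concrete, here's a minimal sketch (hypothetical banners and function names, not portspoof's actual code). The server sends one plausible banner and immediately closes, so no long-lived per-connection state accumulates the way it does in a tarpit:

```python
import socket
import threading

# Hypothetical static banners per port; a real portspoof-style tool
# carries a much larger signature database.
BANNERS = {
    21: b"220 ProFTPD Server ready.\r\n",
    22: b"SSH-2.0-OpenSSH_8.9\r\n",
    25: b"220 mail.example.com ESMTP\r\n",
}

def serve_once(listener, fake_port):
    """Accept one connection, send the fake banner, close immediately.

    Unlike a tarpit, the connection is not held open, so the kernel
    keeps no long-lived state per scanner."""
    conn, _addr = listener.accept()
    conn.sendall(BANNERS.get(fake_port, b"220 service ready\r\n"))
    conn.close()
```

In the real tool, `fake_port` would come from the SO_ORIGINAL_DST lookup on the redirected connection; here it's just a parameter so the behavior can be exercised on localhost.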
And port scanning can trigger false positives in network security scans, which can then lead to having to explain why the servers are configured this way: that some ports that should always be closed due to vulnerability appear open but aren't processing requests, so they can be ignored, etc.
The original LaBrea tarpit avoids DoS'ing its own conntrack table somehow, too;
LaBrea.py: https://github.com/dhoelzer/ShowMeThePackets/blob/master/Sca...
La Brea Tar Pits and museum: https://en.wikipedia.org/wiki/La_Brea_Tar_Pits
The nerdctl README says: https://github.com/containerd/nerdctl :
> Supports rootless mode, without slirp overhead (bypass4netns)
How does that work, though? (And unfortunately podman replaced slirp4netns with pasta from passt.)
rootless-containers/bypass4netns: https://github.com/rootless-containers/bypass4netns/ :
> [Experimental] Accelerates slirp4netns using SECCOMP_IOCTL_NOTIF_ADDFD. As fast as `--net=host`
Which is good, because --net=host with rootless containers is inadvisable security-wise, FWIU.
"bypass4netns: Accelerating TCP/IP Communications in Rootless Containers" (2023) https://arxiv.org/abs/2402.00365 :
> bypass4netns uses sockets allocated on the host. It switches sockets in containers to the host's sockets by intercepting syscalls and injecting the file descriptors using Seccomp. Our method with Seccomp can handle statically linked applications that previous works could not handle. Also, we propose high-performance rootless multi-node communication. We confirmed that rootless containers with bypass4netns achieve more than 30x faster throughput than rootless containers without it
RunCVM, Kata Containers, and gVisor all have a better host/guest boundary than rootful or rootless containers, which is probably better for honeypot research on a separate subnet.
IIRC there are various utilities for monitoring and diffing VMs, for honeypot research.
There could be a list of expected syscalls. If the simulated workload can be exhaustively enumerated, the expected syscalls are known ahead of time and so anomaly detection should be easier.
"Oh, like Ghostbusters."
I tried something like that. It didn't work because the application added the socket to an epoll set before binding it, so before it could be replaced with a host socket. Replacing the file descriptor in the FD table doesn't replace it in epoll sets.
Interesting concept, am curious how this withstands community review and analysis.
Bit puzzled, though, by the statement made immediately after it states that it is GPL2: "For commercial, legitimate applications, please contact the author for the appropriate licensing arrangements."
Since the GPL2 doesn't permit restricting what others do with GPLd software, I don't think this statement is doing what the author hopes; they might want to consult a lawyer.
(IANAL, etc., but there is nothing in there to prevent me, e.g., from building a business out of this, charging gazillions, and keeping it all for myself, provided I make the source available to my customers.)
Provided you make the source for any derivative works available to your customers.
It’s not uncommon that in situations where that’s undesirable (e.g. a closed-source C library that statically links a GPL’d project) that the library owner pays a fee for a separate license allowing that closed-source distribution.
Also, this is sometimes done when it’s not strictly legally necessary, either for risk avoidance or as a way to support the project in corporate environments where “licensing fee” gets waved through but “donation” gets blocked.
I believe this doesn't apply if you're using existing APIs or using GPL code as a library; otherwise many, many corporate codebases would be forced to be open sourced.
GPL absolutely applies when using a library (unless a separate exception has been made). Of course, the LGPL is often used for libraries when this isn't desired by the author.
> or using GPL code as a library
No. The copyleft nature still applies to libraries. That's why the LGPL exists. And the exception in the license for gcc for programs compiled by gcc.
Only if you distribute the binary/source of the GPLd library. You may build a non-GPL program that dynamically links with a GPL library and freely distribute it. As long as what you distribute does not contain their copyrightable code, you do not have to comply with the licence requirements, because you do not need a licence at all to do that. The same applies for static linking if you only distribute the source and require your users to compile the program themselves.
This is not limited to GPL, but applies to proprietary libraries as well. It's OK to require a proprietary library at runtime and you don't need a licence to do that. As long as you do not distribute some intellectual property, copyright law and its limitations are not applicable at all.
This sounds quite assertive, so compulsory "IANAL, this is just my interpretation".
This all sounds awful close to the whole CLISP/ReadLine debacle. Basically, CLISP (a Lisp implementation) originally linked to ReadLine, a GNU library under the GPL. Richard Stallman argued that the author of CLISP had to remove ReadLine or license CLISP under the GPL.
The author originally created his own non-GPL library with the same interface as ReadLine and distributed that, noting that the user could (at their own option) link CLISP with GNU ReadLine instead if they wanted that functionality. Stallman argued that wasn't sufficient.
In the end, CLISP ended up being relicensed to GPL. Note though that no judge ever looked at it, so things might have turned out differently if it had gone to court.
I love these progressively more descriptive details about the GPL/LGPL. It's like a manifestation of the Futurama "You are technically correct; the best kind of correct" meme
Note that if your program is very intertwined with the library, it might still be considered a derivative work.
The Linux kernel has opinions about this: symbols marked with EXPORT_SYMBOL are considered symbols that every operating system would have, so using them doesn't mean you are writing a derivative work. Symbols marked with EXPORT_SYMBOL_GPL are considered implementation details so specific to Linux that if you use them, you can't claim your module isn't derivative of Linux.
You can buy your way out of the GPL if the authors are willing to relicense.
Stallman was actually an advocate of doing this.
Agreed, cf other comments below. My impression is that that is what this person hopes for and that they think that somehow the GPL prevents others from using this code commercially, which it manifestly does not. (Such use would be subject to the GPL, of course.)
I believe the author is saying they're willing to relicense the software for commercial integrations.
I believe you're right, that was my conclusion as well. I'm not sure that that will accomplish what they hoped.
To continue my original example, I could, in theory, take this code, ensure that it works with arbitrary independent pseudo-services, create my own such services, under a proprietary licence, and distribute the whole as an aggregate, which is permitted by the GPL.
The author likely seeks to provide commercial licensing for those interested in integrating their pseudo-services as libraries, which would require either that they be GPLd or that the original code be licensed in some other way.
I hope the author achieves the success they hope for without the licensing and legal hell they may have set themselves up for. It can be a great disappointment to have one's work turned into someone else's success by someone (or someones) with more legal and licence cunning than oneself.
(Note: that ain't me, I've just seen that exact scenario play out more than a fair few times....)
Yes, people can do that. It's inconvenient and risky, so serious customer prospects will pay to avoid it. This is one of the more common open source commercialization strategies; one of the earlier examples is Sleepycat.
The original copyright holder can enforce what they like
Not quite: once you GPL something, while you retain copyright and can licence it in other ways, the GPL itself forbids you from restricting what others can do with it if they take it under the GPL; the one thing they cannot do is change its licence, but you cannot prevent them from selling it, e.g. The FSF are very, very clear on this.
You don't care, because whatever a GPL taker does, they're still bound by the viral copyleft, you're not, and you can sell that privilege to others.
Not if the original author stills holds the copyright, which is likely the case: the GPL does NOT remove your copyright, and in fact depends upon it.
I understand us to be talking about the options available to the original copyright holder, yes.
Could this not trivially be accomplished with a service listening on one port and 'iptables' rules?
Per the README
it binds to just ONE tcp port per a running instance !
Configure your firewall rules:
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 1:65535 -j REDIRECT --to-ports 4444
> By using those two techniques together:
> your attackers will have a tough time while trying to identify your real services.
So... Security through obscurity?
> the only way to determine if a service is emulated is through a protocol probe (imagine probing protocols for 65k open ports!).
So... Security through obscurity?
> it takes more than 8hours and 200MB of sent data in order to properly go through the reconessaince phase for your system ( nmap -sV -p - equivalent).
So... Security through obscurity?
Idk... maybe I'm not versed enough in infosec, but this also raises the question: aren't you attracting more interest if your system lights up green for an exposed Redis instance, leading an adversary to notice you and take a closer look for anything else vulnerable?
>So... Security through obscurity?
This is not a valid criticism on its own.
Security through obscurity is bad when obscurity is the only thing stopping an attacker. It's a meme because obscurity is not a substitute for stronger security mechanisms. That does not mean it cannot be an appropriate complement to them, however.
If I wanted to hide a gold bar, sticking it in an open hole behind a painting on the wall wouldn't be particularly great security. As soon as a robber found the hole, the entirety of my security is compromised.
If I put it in a safe on the wall, it's much more secure. The robber has to drill through the lock to get the gold bar.
If I put it in a safe behind a painting on the wall, the robber has to discover that there's a safe there before they're able to attempt drilling through it. Bypassing the painting is trivial compared to bypassing the safe, but the painting reduces the chance of the actual safe being attacked (up until it doesn't!)
Security should be layered. Obscurity will generally be the weakest of those layers, but that doesn't mean that it has no value. As long as you're not using obscurity as a replacement for stronger mechanisms, there's nothing wrong with leveraging it as part of a larger overall security posture.
Accepted cryptography is also security through obscurity. The thing is that the amount of obscurity must be quantified. Cryptanalysis allows one to calculate these quantities of "obscurity." Then, a full study of effectiveness combines that with the costs associated with brute-forcing the bounds arrived at by the cryptanalysis.
Other parts of infosec are the same, but often with less well-quantified measures of effectiveness. E.g. memory hardening techniques like FORTIFY_SOURCE and MTE are effective in raising the difficulty of exploiting memory vulnerabilities, but under some conditions the vulnerabilities may still be exploitable.
Before using labels like "security through obscurity" one has to first answer: how much does the technique raise the cost for attackers? This is what articles about security systems (including this one) should focus on. In the end, hacking, like most things, comes down to economics.
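A back-of-envelope answer for this tool, with every parameter an assumption pulled out of thin air rather than measured:

```python
TCP_PORTS = 65_535
PROBES_PER_PORT = 10      # assumed: version probes tried per unidentified service
BYTES_PER_PROBE = 300     # assumed: average probe plus response size
SECONDS_PER_PROBE = 0.5   # assumed: round-trip plus service-detection timeout
PARALLEL_PROBES = 10      # assumed: scanner concurrency

# When every port answers, service detection has to probe all of them.
traffic_mb = TCP_PORTS * PROBES_PER_PORT * BYTES_PER_PROBE / 1e6
hours = TCP_PORTS * PROBES_PER_PORT * SECONDS_PER_PROBE / PARALLEL_PROBES / 3600
```

With these made-up numbers you land in the same ballpark as the README's "8 hours and 200MB": the point is simply that 65k "open" ports force the scanner to spend probes on every one of them, instead of skipping the closed ones for free.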
How much do you think this tool raises the cost for attackers?
Most modern attacks succeed by remaining undetected, and this directly counters that. When combined with "every IP address responds to ARP and ICMP" (as discussed recently), you can make it impossible for an attacker to scan your network without firing off a honeypot alarm. That alarm can introduce increasing packet loss on the attacker's host as scans continue, buying enough time for an on-call human to finish what they're doing and get to a keyboard to deal with the intrusion.
The next level of value for this is to TLS-encrypt random traffic between ports and hosts on the network, generated and injected by the switch into each network port, so that sniffing traffic is not an effective discovery mechanism. After that, address and port randomization of servers using a time-linked randomization seed stored in an HSM, so that attackers have no way to pierce the onion skin unless they control the HSM-bearing host.
This is all the natural outgrowth of container approaches, but in labor terms is nightmarishly complicated if you aren’t willing to spend for it.
No, instead of dropping all the packets into a black hole, you could put them into a "hey, we just got a scan request" pile, and if the pile is bigger than some heuristic, call the on-call guy. It's completely unnecessary to respond to the scanner to have this functionality.
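That "pile plus heuristic" is a few lines of bookkeeping. A sketch, where the threshold and names are invented and a real deployment would also need expiry of old entries:

```python
from collections import defaultdict

# Invented heuristic: how many distinct ports one source may touch
# before we treat it as a scan and page someone.
SCAN_THRESHOLD = 100

ports_probed: dict[str, set] = defaultdict(set)

def record_probe(src_ip: str, dst_port: int) -> bool:
    """Record one probe; return True once src_ip looks like a port scanner."""
    ports_probed[src_ip].add(dst_port)
    return len(ports_probed[src_ip]) >= SCAN_THRESHOLD
```

A legitimate client touching one or two ports never trips the alarm; a sequential scan trips it on its 100th distinct port, whether or not you ever answered a single packet.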
The “cost” includes all resources, including time. Some classes of attackers will be significantly slowed down by this.
People who criticize 'security through obscurity' don't know how hard it is to reverse engineer shit.
Either that or they're researchers or adversaries playing a game. Because trying to figure out WTF is going on is hard, so any clues you can extract from your targets make things easier.
Agree, a lot of people misunderstand the purpose of security through obscurity: it's a layer on top of other layers of security, designed to waste attacker time. If your attacker is a nation state, that's probably not going to stop them, but it might stop a lot of lesser threats who realize it's not worth the effort.
When you see someone probing every single port on the box, you know they’re either a bad actor, or a security tool. No legit user is going to keep hammering ports without a known service.
Bad actors you can either block or counterattack. Security tools should be registering their address with whatever internal tracker you're using so they can be whitelisted.
> When you see someone probing every single port on the box, you know they’re either a bad actor
That is not what the tool is for though... It is a tool specifically made to hinder... IDK... making any sense of an nmap scan?
The objective here is to give script kiddies and other spray-and-pray attackers the finger.
How does it do that though?
You light up in a skid's Internet-wide scan for, let's say, Redis. They try and fail to dump anything from it, so they proceed to run a vulnerability scanner on your host (skids gonna skid)... It proceeds to discover, IDK, a trivial SQLi you coded like a dumbass...
It’s literally an arms race. You make it more expensive for attackers to progress. Yes, security through obscurity is bad on its own, but it’s not necessarily useless as an additional measure.
For a similar concept, look at the delay you get after entering a password wrong to a login prompt: That technically does not add any barrier whatsoever, but it does make it much harder for an attacker to brute force the password.
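Rough numbers for that login-delay point, with every figure assumed for illustration:

```python
PIN_SPACE = 10 ** 6              # worst case for a 6-digit PIN
DELAY_SECONDS = 2.0              # assumed fixed delay after each failed attempt
NO_DELAY_GUESSES_PER_SEC = 10_000  # assumed rate if the check returned instantly

# Online brute force against the delayed prompt, worst case:
with_delay_days = PIN_SPACE * DELAY_SECONDS / 86_400
# Same search with no delay at all:
without_delay_minutes = PIN_SPACE / NO_DELAY_GUESSES_PER_SEC / 60
```

The delay adds zero cryptographic strength, but under these assumptions it turns a sub-two-minute search into weeks of wall-clock time, which is the whole economic argument.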
They try to run some other attack on you, for something you don't have.
If more servers use the tool, they waste attackers' time. A bit like herd immunity.
Look up post-exploit mitigations, such as ASLR and pointer authentication. These are mechanisms that only become relevant when software has already been breached. In most cases, they cannot entirely prevent further progress by the attacker, just make it significantly harder.
Similar principle (only on the other end).
Security through obscurity is somewhat helpful even though it can be defeated. Take camouflage and honeypots for example. It would probably be unwise to use this without a thorough audit of the code however.