I made a purchase yesterday from Meta (Oculus). A few minutes after payment, I received an email asking to click to confirm it was me.
It came from verify@verification.metamail.com, with alert@nofraud.com cc'd. All red flags for phishing.
I googled it because the email had all the purchase information, so unless a malicious actor had infiltrated Meta's servers, it had to be legitimate. And it was, after a bit of googling. But why do they do such things? I would expect better from Meta.
> I would expect better from Meta
I'm surprised you would expect better.
Everything I hear about their processes, everything I experience as a user, says their software development is all over the place.
Uploading a video on mobile web? I get the "please wait on this site" banner with no sign of progress, and it never completes. An image? Sometimes it's fine, sometimes it forgets rotation metadata. Default feed? Recommendations for sports teams I don't follow in countries I don't live in. Adverts? So badly targeted that I end up reporting some of them (horror films) for violent content, while even the normal ones are often for things I couldn't get if I wanted to, such as a lawyer specialising in giving up a citizenship I never had. Write a comment? Sometimes the whole message is deleted *while I'm typing* for no apparent reason.
Only reason I've even got an account is the network effect. If the company is forced to make the feed available to others, I won't even need this much.
If they've stopped caring about the quality of their core product, what hope for a billing system's verification emails?
Yes, but receiving a message that is not from them right after a transaction you just made with them is quite bad.
As a related anecdote: on the phone with Amazon's refund department, I was told to wait for a callback. Got a callback from a number that I obviously didn't get to vet. Then on that call they asked for my email address so they could email me an Amazon sign-in page.
I almost just hung up, because you have the rep urging you to do it while I'm trying to vet that every link is what it says it is before I enter any Amazon info. The cookies from my already signed-in session did not apply to this SSO either. Ended up working out in the end.
I could not believe they have the same flow as the scammers do. This is the same company that regularly sends me warning emails about phishing. Go figure.
> I could not believe they have the same flow as the scammers do.
There's a name for this. Scamicry [0].
I have a home equity line of credit, and every time I move to a new address, the anti-fraud department calls me and asks me to verify my address. I was rude af to them the first time because I was convinced they were scammers.
Meanwhile every time I am expecting a package via USPS or DHL, without fail, I get a scam text message about my incoming package. I never get them when I am not expecting a package. This is using a variety of devices, web shops, etc. Somewhere along the way, there is a data stream being sold or leaked.
>I never get them when I am not expecting a package.
For what it's worth, I get scam messages claiming to be about USPS, DHL, et cetera even when I'm not expecting a package. Recently, I've had a couple claiming to be about a package failing to clear customs (but if I just pay a quick fee...).
Looking at what No Fraud does [0], it sounds like Meta has either spun the first-party hardware store off from their usual infra, or outright asked a third party to deal with it, and to insulate their main business they split the email domains.
Most companies are already splitting domains for customer and corporate communication, that's a step in the same direction.
While you're right that it sounds fishy as hell, it's also mildly common IMO and understandable, especially when e-commerce is not the main business, and could be a reflection of how anti-phishing provisions are pushing companies to be a lot more protective of the email that comes from their main domain.
It's only understandable because no one has standards.
If I talk to Peter, Paul has no business getting any information about that or discussing it with me, until Peter introduces me to Paul.
We teach our kindergarteners this rule!
For better or worse, we've been in that world for a long time really.
If I ask my bank for a debit/credit card, they'll pass my request to another partner which will do my background check and potentially contact me for additional info.
If I order a delivery from IKEA, it will probably be handled by some local company, and I'll have no idea how exactly they're bound to IKEA. Some complete stranger will be at my doorstep with a truck waiting behind them.
There might be some mention of involved third parties in the contracts, but we usually don't read them.
So we're used to getting random phone calls from unknown numbers claiming to be associated with a reputable entity that turn out to be legit, even though it sounds completely fishy.
> But why do they do such things?
In my experience it's because getting a subdomain set up inside large companies is a MAJOR bureaucratic nightmare, whereas registering a new domain is very easy.
I experienced the exact same thing when I bought the Flipper Zero. A "hacker device", and the email communication following the sale was straight out of a phishing campaign playbook. I don't remember the details, it has been a while, but it was wild how sketchy the emails looked. I hope they have improved the email templates since.
I got way worse. I was fined for leaving an unattended bag at the train station for a bit. The fine came through an SMS message redirecting to a domain which I had to whois to verify was owned by the train company…
Couldn't an attacker put in bogus whois data?
Is there anything which validates that the information from whois is actually accurate?
I’m not sure about that. I also searched for other indicators that the link was actually valid.
You left your ID in the baggage?
I left it long enough for it to be discovered, but I was around. I wasn't fined on the spot for some reason and received the fine later.
My client sends communications and security-related guidance either from an external domain or as PDF files. Every quarter they do internal phishing tests. I have no words.
It's always infuriating getting email from Amazon or my bank "here's signs of potential phishing emails/texts" that doesn't include an exhaustive list of every email address and phone number that that organization will try to contact me from. That should be table stakes when it comes to phishing avoidance, and it's something that can only be done by the business, not the customer.
Yes, like you say, there's always the chance that someone hijacked an official domain - that's where other things like a formal communication protocol ("we will never ask for your password", "never share 2FA codes", "2FA codes are separate from challenge-response codes used for tech support") and rules of thumb like "don't click on shortened links" come in. Defense in depth is a must, but the list of official addresses should be the starting point and it isn't.
A legit bank will NEVER call you, except to say "call us back at the number on your card". Caller ID is not secure.
I've had (presumably the fraud department of) AmEx ring me and ask for personal verification details over the phone before they'll even tell me ANYTHING (even just whether the call is about fraud, or how urgent it is), on more than one occasion. Even though I was pretty confident it was a legitimate call (typically an email notification arrives from them about some odd activity at the same time, or it's whilst I'm making a payment), I decline, because surely this is exactly the same as what scammers would do?
Mine has called a few times, without the "call us back". So far it's been the fraud department when I made an unusual payment, plus the occasional "how are we doing?" courtesy call.
I have confirmed the fraud department one was legitimate, but haven't bothered with the others.
Thanks for the correction!
My bank doesn't tell me that. It's this kind of incompetence and lack of responsibility on their part that's leading to scams and phishing being so unnecessarily successful.
I have a HELOC, and every time I move, their fraud dept calls me. And the callback number isn't the bank's main number. It was super sketchy but legit.
I noticed Substack recently switched from "click this link to log in" to "we're emailing you a code; enter the code to log in". Wish other companies would follow this approach.
You should check your browser extensions!
That, after thirty years, email security still depends on the wisdom of individuals not clicking the wrong link, is appalling.
The situation involves institutions happy to send opaque links by email as part of their workflow. What could change this? All I can imagine is state regulation, but that also seems implausible.
I just received a corporate IT security training link. From an external address and with a cryptic link. After a previous training which asked us not to trust external emails (spoofable), especially not with unknown links.
IT wasn't amused when I reported it as a phishing attempt.
Haha - amazing. I've had the same thought, and I'm sure the scammers have too.
I always do report those.
At a previous client, the CIO complained about the low click rate for their security training; everyone thought it was spam.
Email and web browsing rely on “deny lists” rather than “allow lists”: anything goes, and you block bad addresses, rather than nothing getting through until you earn permission/trust/credibility. This helped all the networks grow, but it means indefinite whack-a-mole.
I think (but am not sure) that something using trust networks from the ground up would be better in the long term. Consider anything dodgy until it has built trust relationships.
Eg email servers can’t just go for it. You need time to warm up your IP address, use DKIM etc. People can’t just friend you on FB without your acceptance so it’s a lot safer than email, if still not perfect. A few layers of trust would slow bad actors down significantly.
A trust network wouldn’t be binary. Having eg a bunch of spam accounts all trust each other wouldn’t help them get into your social or business network (see the sketch below).
Thoughts from experts?
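A rough sketch of the non-binary idea, in Python. Every name, weight, and rule here is made up, purely to show the shape, not a real design:

    # Hypothetical sketch of a non-binary sender-trust score. The field
    # names, weights, and one-hop vouching rule are all invented here,
    # just to illustrate the shape of such a system.
    from dataclasses import dataclass, field

    @dataclass
    class Sender:
        address: str
        good_interactions: int = 0            # replies, "not spam" marks, etc.
        spam_reports: int = 0
        vouchers: list = field(default_factory=list)  # addresses vouching for this sender

    def direct_score(s: Sender) -> float:
        # Spam reports weigh heavily: trust is slow to earn, fast to lose.
        return s.good_interactions - 5.0 * s.spam_reports

    def trust_score(s: Sender, known: dict) -> float:
        # Borrow only a fraction of each voucher's *direct* history (one hop,
        # no recursion), so a ring of fresh spam accounts vouching for each
        # other gains nothing: none of them has any history of its own.
        borrowed = sum(max(0.0, direct_score(known[v])) * 0.1
                       for v in s.vouchers if v in known)
        return direct_score(s) + borrowed

    known = {
        "alice@example.com": Sender("alice@example.com", good_interactions=40),
        "new@example.net": Sender("new@example.net", vouchers=["alice@example.com"]),
    }
    # alice scores 40.0; the vouched-for newcomer starts at 4.0; a total
    # stranger at 0.0, i.e. "consider it dodgy until it has built trust".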
> Email and web browsing rely on “deny lists” rather than “allow lists”: anything goes, and you block bad addresses, rather than nothing getting through until you earn permission/trust/credibility.
But this is fundamental to an open Internet. Yes going whitelist-only would stop bad actors but it would also hand over the entire internet to the megacorps with no avenue for individual success.
I don’t think that’s necessary. We could create open trust networks/protocols that don’t rely on megacorps. In fact it’s probably exactly the megacorps who wouldn’t want this to happen because they benefit from the relative trust they have on their closed environments.
Eg certs. Let’s Encrypt equivalent for credibility, where I can trust you as we interact more, and borrow from your trust networks. Send spam and you reduce your cred. (Letscred.com is available right now if anyone is very bored :)
Gotta be tested very carefully so you don’t end up with a black mirror episode, of course.
Email and browsers shouldn't be glibly equated.
Email as it presently stands is a constant opening for phishing and spear phishing. Browser exploits are common too, but it's harder (not impossible) to make them personal. And phishing doesn't have to rely on a browser exploit - a fake login page is enough.
It's logical to whitelist (or disallow) email links but still allow browsers to follow links.
The same is true for operating systems. Why don't they sandbox properly?
We have sandboxing on mobile apps. Why can't we have the same for desktop?
> Why can't we have the same for desktop?
Morally? No reason why, and people are working on it (slowly).
Practically? Because sandboxing breaks lots of things that users and developers like, such as file picking (I hate snaps), and it takes time to reimplement them in a sandbox in the way that people expect them to work. If it requires the developers' cooperation, then it's even slower, because developers have enough APIs to learn as it is.
And to the extent you mitigate some of those user complaints (as Flatpak etc. are doing), you are basically re-opening the exact same holes that you developed the sandbox to get away from.
Asking for fully bug free software is nice but unrealistic. Browsers are ostensibly somewhat sandboxed too but there are always new zero-days 'cause browsers are essentially OSes with many moving parts.
However, it is reasonable to expect a single hole to be fixed. The "email hole" has been discussed for decades, but here we are.
Email is still the lifeblood of the internet. While we mostly get away with Slack and others for in-group communication, anything going outside, especially to customers, still goes through email.
At that scale, expecting a core issue to be quickly (or ever) fixed is just unrealistic. I honestly wonder if fundamentally it will ever be fixed, or if instead we get a different communication path to cover the specific use cases we do care about security.
PS: the phone is now almost a century and a half old, and we sure couldn't solve scamming issues...
After all these years, Microsoft is finally rolling out Win32 app isolation, so maybe we are finally on the right path...
Developers initially revolted against Microsoft's UWP and the Mac App Store.
Not because they isolated the applications though! Because they were shit, and that's not a requirement.
A walled garden is not a substitute for security.
APPX (the installer format used by the Windows Store) and its successor MSIX contain decent security improvements, including filesystem and registry virtualisation and a capability-based permission system.
After the limited success of the Windows Store you can now get the same in standalone installers. It has been adopted by approximately nobody.
A good start would be ditching HTML in email. Plain text is perfectly suitable for non-marketing emails (and marketing emails are just chaff at this point anyways).
I’ll die on this hill.
One word deserves so much blame for the current state of the internet: marketing
> A good start would be ditching HTML in email.
How would that help? You can put links in plain text.
It's all about attack surface, and HTML's attack surface is huge. HTML in email is strictly unnecessary. Name another popular messaging tool that allows you to craft custom HTML messages: SMS, WhatsApp, social posts, Snapchat, none of them do.
The only people who want to send HTML emails are marketers, advertisers, trackers, scammers, hackers, and that clueless manager who wants the cornflower blue background. (most of these actors are the same people, except for that last one).
You can be perfectly spearphished by a plaintext email, sent by your “colleague”, mentioning some current issue at work, and asking you to verify some ticket
or by your “friend” mentioning a highly personal issue that only you two were supposed to know, asking you to phone someone on their behalf
or by your “relative”, etc.
Security is eliminating attack vectors one at a time. HTML in email is a gigantic hole. We don’t throw up our hands and say “whelp this only solves 90% of issues. Guess we don’t bother.”
> I’ll die on this hill.
Same. I found a setting in legacy Outlook to force all e-mails to plain text. So every corporate email I reply to converts the product owner's HTML-formatted email into junk.
Gives me a little joy that the e-mail they worked so hard on gets mangled by my Outlook replies :)
Technologically, email aliases have been working wonders for me in personal use. No idea if it could be rolled out effectively for nontechnical users at an organizational scale though, even with automation.
It also does little against compromised mailboxes - heck, a sufficiently advanced spear phisher might even have better chances if the user misunderstands the security improvements this would provide.
But I think other than this, there's not much else to fix. Some people are malicious, others get compromised. No fixing that.
I think for the vast majority of people it doesn't depend on not clicking links.
Chrome 0-days are expensive and aren't going to be wasted on the masses. They'll be sold to dodgy middle eastern countries and used to target journalists or whatever.
If you aren't a high value target you can click links. It's fine.
You can have all of this fancy security. But at the end of the day, the weakest and most vulnerable link in the chain is the human.
I blame Microsoft. As a consumer OS, its default stance should be no, the user does not intend to grant god permissions to this embedded or external script just because they clicked it. Instead the user should be challenged with a dialog: do you want to install this app and then execute it?
To which everybody will click yes. They have been conditioned by too much half-baked crap out there that requires it, and by the need to get on with their lives instead of having to start investigating things they don't have a clue about anyway (and don't want to, not being IT folks).
This is why you don't daily drive a local admin and leave UAC enabled. If you were using an unprivileged account, you'd be getting UAC prompts.
This lines up well with the success rates I have seen from expert phishers. When I worked at a certain well known company with strong security, a demon called Karla would succeed at spearphishing a bit over 50% of the security team.
AI now means much less skilled people can be as good as she was. Karla as a Service. We are doomed.
What defines a successful spear phishing? Is it just clicking a link?
My process when I see a sketchy email is to hover over the links to see the domain. Phishing links are obvious to anyone who understands how URLs and DNS work.
But working for a typical enterprise, all links are “helpfully” rewritten to some dumbass phishing detection service, so I can no longer do this.
At my current company I got what I assumed was a phishing email, I hovered over the links, saw they were pointing to some dipshit outlook phishing detection domain, and decided “what the hell, may as well click… may as well see if this phishing detection flags it” [0]…
… and it turns out it was not only not legit, but it was an internal phishing test email to see whether I’d “fall for” a phishing link.
Note that the test didn’t check if I’d, say, enter my credentials into a fraudulent website. It considered me to have failed if I merely clicked a link. A link to our internal phishing detection service because of course I’m not trusted to see the actual link itself (because I’d use that to check the DNS name.)
I guess the threat model is that these phishers have a zero-day browser vulnerability (worth millions on auction sites) and that I’d be instantly owned the moment I clicked an outlook phishing service link, so I failed that.
Also note that this was a “spear phishing” email, so it looked like any normal internal company email (in this case to a confluence page) and had my name on it. So given that it looks nearly identical to other corporate emails, and that you can’t actually see the links (they’re all rewritten), the takeaway is that you simply cannot use email to click links, ever, in a modern company with typical infosec standards. Ever ever. Zero exceptions.
- [0] My threat model doesn’t include “malware installed the moment I click a link, on an up to date browser”, because I don’t believe spear phishers have those sort of vulnerabilities available to burn, given the millions of dollars that costs.
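For what it's worth, the hover-and-check routine is mechanical enough to script when the real link is visible. A minimal stdlib-only sketch (the sample URLs are fabricated):

    # Pull out the hostname a link actually points at -- the thing the
    # hover check is really inspecting. Sample URLs are fabricated.
    # (Mapping a hostname to its registrable domain properly needs the
    # Public Suffix List, e.g. the third-party tldextract package.)
    from urllib.parse import urlparse

    def link_host(url: str) -> str:
        return urlparse(url).hostname or ""

    # The classic trick: the real domain is whatever comes last;
    # "amazon.com" appearing on the left proves nothing.
    print(link_host("https://amazon.com.account-verify.example.ru/login"))
    # -> amazon.com.account-verify.example.ru
    print(link_host("https://www.amazon.com/gp/css/order-history"))
    # -> www.amazon.com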
Problem is, Outlook now obfuscates the shit out of links - something called Safe Links or along those lines. When I hover over a link, I now have no idea where it wants to take me unless I copy and paste it and dig through the 500-character link to find where it actually points.
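For the curious: those rewrites carry the original destination URL-encoded in a url= query parameter, so it can be recovered without clicking. A rough sketch, with a fabricated example link:

    # Recover the original destination from an Outlook/Defender Safe Links
    # rewrite: the real target travels URL-encoded in the "url" parameter.
    # The wrapped link below is fabricated.
    from urllib.parse import urlparse, parse_qs

    def unwrap_safelink(link: str) -> str:
        parsed = urlparse(link)
        host = parsed.hostname or ""
        if host.endswith("safelinks.protection.outlook.com"):
            target = parse_qs(parsed.query).get("url")
            if target:
                return target[0]
        return link  # not a Safe Links rewrite; leave it alone

    wrapped = ("https://nam02.safelinks.protection.outlook.com/"
               "?url=https%3A%2F%2Fexample.com%2Finvoice&data=abc123")
    print(unwrap_safelink(wrapped))  # -> https://example.com/invoice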
It gets worse: if you try to see the URL from a phone, there is a good chance it will load the page to show you a preview. They think it's helpful... that motherfucking preview on the iPhone forced me to spend a full hour in training because they think I clicked on the link.
Now I send all of these types of email to spam and don't give a fuck. Anything "internal" with a link to click goes to spam unless it's directly from my boss. Turns out 99% of it is not that important.
"The cost-effective nature of AI makes it highly plausible we're moving towards an agent vs agent future."
Sounds right. I assume we will all have AI agents triaging our emails trying to protect us.
Maybe we will need AI to help us discern what is really true when we search for or consume information as well. The amount and quality of plausible but fake information is only going to increase.
"However, the possibilities of jailbreaks and prompt injections pose a significant challenge to using language models to prevent phishing."
Gives a hint at the arms race between attack and defense.
I don't think that there will necessarily be an arms race. Some security problems are deterministically solvable and don't need AI.
For instance, there is a very good classical algorithm for preventing password brute-forcing - exponential backoff on failure per IP address, maybe with some additional per-account backoff as well. Combined with sane password rules (e.g. correct horse battery staple, not "you must have one character from every language in Madagascar"), that makes password brute-forcing infeasible and forces attackers to try other approaches - which in the security world counts as success. No AI needed.
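A minimal in-memory sketch of the per-IP part (a real deployment would persist state in a shared store and be gentler with NATed addresses):

    # Exponential backoff on failed logins, per source IP: each failure
    # doubles the wait before that IP may try again. In-memory sketch only;
    # production would use a shared store and add per-account backoff too.
    import time

    failures = {}  # ip -> (consecutive_failures, time_of_last_failure)

    def may_attempt(ip):
        count, last = failures.get(ip, (0, 0.0))
        wait = min(2 ** count, 3600) if count else 0  # 2s, 4s, 8s, ... capped at 1h
        return time.time() - last >= wait

    def record_attempt(ip, success):
        if success:
            failures.pop(ip, None)  # clean slate on success
        else:
            count, _ = failures.get(ip, (0, 0.0))
            failures[ip] = (count + 1, time.time())

    # Usage: check may_attempt(ip) before verifying the password, then call
    # record_attempt(ip, ok). Ten straight failures already mean a ~17-minute wait.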
This story is extra hilarious to me because a threat actor has made a service named karla.pw that lets you harvest info from infostealers and creds dumps.
Is this the same Karla as in Fight Club?
I believe that was Marla.
You are correct; not sure if this is the origin story from before she changed her name.
They built their phishing emails using data scraped from public profiles. Fascinating.
I have to wonder if, in the near future, we're going to have a much higher perceived cost for online social media usage. Problems we're already seeing:
- AI turning clothed photos into the opposite [0]
- AI mimicking a person's voice, given enough reference material [1]
- Scammers impersonating software engineers in job interviews, after viewing their LinkedIn or GitHub profiles [2]
- Fraudsters using hacked GitHub accounts to trick other developers into downloading/cloning malicious arbitrary code [3]
- AI training on publicly-available text, photo, and video, to the surprise of content creators (but arguably fair use) [4]
- AI spamming github issues to try to claim bug bounties [5]
All of this probably sounds like a "well, duh" to some of the more privacy and security savvy here, but I still think it has created a notable shift from the tech-optimism that ran from 2012-2018 or so. These problems all existed then, too, but with less frequency. Now, it's a full-pressure firehose.
[0]: https://www.wsj.com/politics/policy/teen-deepfake-ai-nudes-b...
[1]: https://www.fcc.gov/consumers/guides/deep-fake-audio-and-vid...
[2]: https://connortumbleson.com/2022/09/19/someone-is-pretending...
[3]: https://it.ucsf.edu/aug-2023-impersonation-attacks-target-gi...
[4]: https://creativecommons.org/2023/02/17/fair-use-training-gen...
[5]: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
It's all part of the grand enshittification. Trust in anything on the Internet is gonna erode without any auth; the anonymous internet will slowly die.
This is one of the terrifying, probably already happening threats presented by current LLMs.
Social engineering (and I include spearphishing) has always been powerful and hard to mitigate. Now it can be done automatically at low cost.
While I broadly agree with the concerns about using LLMs for "commoditized", large-scale phishing, isn't the study a bit lacking? Specifically, "click through" is a pretty poor metric for success.
If I receive a unique / targeted phishing email, I sure will check it out to understand what's going on and what they're after. That doesn't necessarily mean I'm falling for the actual scam.
I hate the InfoSec generated phishing tests.
They all pass DKIM, SPF, etc. Some of them are very convincing. I got dinged for clicking on a convincing one that I was curious about and was 50/50 on it being legit (login from a different IP).
After that, I added an auto-delete rule for all emails that have the headers from our phish-testing-as-a-service provider.
Did you report phishing before you clicked? If not, you deserve to be dinged.
I think the idea is that it is a numbers game. If you have a way to inexpensively generate a much higher click-through rate than doing it manually, your success rate will go up with a lower investment.
The average SMB has people who act very differently with their personal email, because they need to protect the checking account that has $400 in it, but who show no such caution or reluctance with work email. They are "clickers". There are also "repeat clickers", "serial clickers", and "frequent clickers". The only thing this study is doing is automating a small part of the profiling and preparation.
There was yet another study, "Why Employees (Still) Click on Phishing Links" (2020), on NIH's PMC: https://pmc.ncbi.nlm.nih.gov/articles/PMC7005690/
Given the pathology, clicking is the visible and obvious symptom.
How did they generate these? If I try with ChatGPT then it refuses, citing a possible violation of their content policy. Even when I tell it that this is for me personally, it knows who I am, and that it's just for a test -- which obviously I could be just pretending, but again, it knows who I am but still refuses.
If you're using ChatGPT directly as opposed to the API, the system prompts could be driving it.
Also, in section 3.6 of the paper, they mention that just switching "phishing email" to "email" in the prompt helps.
Or said differently: tell it that it's for a marketing email, and it will gladly write personalized outreach.
You can host an open-source LLM offline.
The team specifically "use AI agents built from GPT-4o and Claude 3.5 Sonnet". The question here is "how did they manage to do so", not "what else can do it with less effort".
As those two are run by companies actively trying to prevent their tools from being used nefariously, this is also what it looks like to announce they found an unpatched bug in an LLM's alignment. (Something LessWrong, where this was published, cares about much more than Hacker News does.)
There are many uncensored open-weights models available that you can run locally.
I'm aware what I can do. I was wondering how they did it.
If the study was done with target consent, it might be biased with inflated click-through rates due to the targets expecting benign well-targeted spear-phishing messages.
If it was done without target consent, it would certainly be unethical.
They got IRB approval. The authors framed the emails as part of a marketing study involving "targeted marketing emails."
It seems like “the subject clicked a link in an email” is equated to “being phished”, but I’m not certain that is a good definition.
I'm certain that someday I'm going to be dinged on a really shallow kind of work security test because I decided to investigate a link into a sandbox/honeypot environment.
These phish-testing companies always stick a header (X-PHISH-TEST or some such) on the email so the email server can whitelist it -- easy to just blackhole-filter anything with that header in Outlook after you've seen one test.
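If your client can't match on arbitrary headers, a few lines of Python can. A rough sketch; the header names below are examples, so inspect a known test email to see what your vendor actually stamps:

    # Flag messages carrying a phish-test vendor header so they can be
    # routed to a blackhole folder. Header names below are hypothetical.
    import email
    from email import policy

    PHISH_TEST_HEADERS = ("X-PHISH-TEST", "X-PhishTest-Campaign")  # examples only

    def is_phish_test(raw_message: bytes) -> bool:
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        return any(h in msg for h in PHISH_TEST_HEADERS)

    # Wire this into whatever your setup offers: an IMAP sweep that moves
    # matches, a procmail/Sieve rule server-side, etc.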
What stops an attacker from abusing the same header?
It could be kinda-secure if the header had to have a payload which matched a certain value pre-approved for a time-period. However an insider threat could see the test going on and then launch their own campaign during the validity window.
We had an email come in from a pension-combining processor; the URL they gave so that you could add information about someone's pension was similar to:
employer.git.pension-details.vercell.app
Why do these companies make this stuff so hard!?
It's probably more a reflection on me than the authors, but one thing that stood out for me in this paper is that there is a spelling mistake in the conclusion ("spar phishing"), which immediately made it come across as poorly-reviewed and got me wondering if there are other mistakes that are outside of my expertise to identify.
To my understanding, papers on arXiv are NOT peer-reviewed - skipping that step is the primary point of the platform. The idea, as I see it, is to allow papers to be published more quickly, at no cost, and freely accessible to everyone online.
I don't think this is a bad thing, because even peer-reviewed papers can be completely fabricated and still get published. You shouldn't rely on a single paper alone, and should do further research if you have doubts about its content.
I’ve always figured those guardrails wouldn’t really hold up, but hearing that AI-based phishing can be 50 times more cost-effective than manual attacks is a serious wake-up call. We might have to rethink everything from spam filtering to overall threat detection to step up our AI defense game.
I believe I was the target of employment-flavored spear phishing a few months ago. Could have been a researcher like the OP.
- 3 new email chains from different sources in a couple weeks, all similar inquiries to see if I was interested in work (I wasn't at the time, and I receive these very rarely)
- escalating specificity, all referencing my online presence, the third of which I was thinking about a month later because it hit my interests squarely
- only the third acknowledged my polite declining
- for the third, a month after, the email and website were offline
- the inquiries were quite restrained, having no links, and only asking if I was interested, and followed up tersely with an open door to my declining
I have no idea what's authentic online anymore, and I think it's dangerous to operate your online life with the belief that you can discern malicious written communications with any certainty, without very strong signals like known domains. Even realtime video content is going to be a problem eventually.
I suppose we'll continue to see VPN sponsorships prop up a disproportionate share of the creator economy.
In other news Google routed my mom to a misleading passport renewal service. She didn't know to look for .gov. Oh well.
Organizations should be spear phishing their employees using LLMs to test defenses and identify gaps in security awareness training and processes.
It's worth noting that "success" here is getting the target to click a link, and not (for example) handing over personal information or credentials.
Imagine if models were trained for this purpose using OSINT and reinforcement learning, instead of repurposing a general model with generic prompts against a somewhat-safeguarded LLM?
That's where we're headed. Bad actors paying for DDoS attacks is more or less mainstream these days. Meanwhile the success rate for phishing attacks is incredibly high and the damage is often immense.
Wonder what the price for AI targeted phishing attacks would be? Automated voice impersonation attempts at social engineering, smishing, e-mails pretending to be customers, partners, etc. I bet it could be very lucrative. I could imagine a motivated high-schooler pulling off each of those sorts of "services" in a country with lax enough laws. Couple those with traditional and modern attack vectors and wow it could be really interesting.
"Look, humans will adapt to the ever-increasing and accelerating nightmares we invent. They always have before. Technology isn't inherently evil, its how it is used that can be evil, its not our fault that we make it so accessible and cheap for evil people to use. No, we can't build safeguards, the efficient market hypothesis leaves no room for that."
Mostly accurate, except I would change the last sentence to:
"We take safety very seriously. Look how much safer our SOTA model is based on our completely made up metrics. We will also delay releasing these models to the public until we ensure they're safe for everyone, or just until we need to bump up our valuation, whichever comes first."
They keep saying "participants", but am I missing where the targets opted into participation?
Grandma is fkd
We are all grandma.
I've had coworkers, and so has my spouse, who have fallen for the "iTunes gift cards for the CEO" trick. I think grandma is no longer an accurate stand-in for the tech-unsavvy person who is vulnerable to spearphishing attempts.
I get about 10 emails per week from my "CEO" to pay an invoice. I've even gotten a few text messages. Oddly, the emails never have an attachment. Is this because Google (Workspace account) is removing it?
I've always wondered if it is 10 different orgs doing the campaigns, or the same one. If the same one, why send 10?
This is somehow not considered to be an active warzone when it clearly is. The slightest misstep could ruin your life.
> I've always wondered if it is 10 different orgs doing the campaigns, or the same one. If the same one, why send 10?
My bet is that one criminal group is selling software to enable this, with very similar default settings. Then ten groups buy the software, and each one ends up sending you a very similar email.
I would argue that pretty much everyone could be socially engineered into dropping their guard for a moment.
This research actually demonstrates that AI will reduce the phishing threat long-term, not increase it. Yes, the 50x cost reduction is scary, but it also completely commoditizes the attack vector.
I'm sorry, but I'm not sure I follow. How would the commoditization of spear phishing reduce phishing threats long-term? To me that implies the exact opposite would happen.
In the future you'll create a new email account only to receive 200,000 phishing messages per second. At that point, you'll close the account because it's completely useless and thus be protected from phishing. :P
Exactly
Sounds good :D