If companies practiced data minimisation, and end-to-end encrypted their customers' data that they don't need to see, fewer of these breaches would happen because there would be little incentive to break in. But intelligence agencies insist on having access to innocent citizens' conversations.
> But intelligence agencies insist on having access to innocent citizens' conversations.
That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Until the data breaches lead to serious $$$ impact for the company, the impact of these breaches will simply be waved off and pushed down to users. ("Sorry, we didn't protect your stuff at all. But, here's some credit monitoring!") Even in the profession of software development and engineering, very few people actually take data security seriously. There's lots of talk in the industry, but also lots of pisspoor practices when it comes to actually implementing the tech in a business.
Companies already pay for cyber insurance because they don't want to take on this risk themselves.
In principle the insurance company then dictates security requirements back to the company in order to keep the premiums manageable.
However, in practice the insurance company has no deep understanding of the company and so the security requirements are blunt and ineffective at preventing breaches. They are very effective at covering the asses of the decision makers though... "we tried: look we implemented all these policies and bought this security software and installed it on our machines! Nobody could possibly have prevented such an advanced attack that bypassed all these precautions!"
Another problem is that often the IT at large enterprises is functionally incompetent. Even when the individual people are smart and incentivised (which is no guarantee) the entire department is steeped in legacy ways of doing things and caught between petty power struggles of executives. You can't fix that with financial incentives because most of these companies would go bankrupt before figuring out how to change.
I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low-risk (i.e. nobody's going to get fired over implementing them) way.
The typical IT department in a large corporation is way too big to have reasonable visibility into what it manages. There's no way to build reasonable controls that work when you have 50K programmers on staff. It's purely a matter of size.
Often the end result is having just enough red tape to turn a 2 week project into an 8 month project, and yet not enough as to make sure it's impossible for someone to, say, build a data lake into a new cloud for some reports that just happen to have names, addresses and emails. Too big to manage.
Which gets back to the original point, that the real answer is to minimize how much data is held in the first place. Controls will always be insufficient to prevent breaches. Companies and organizations should keep less data, keep it for less time, and try harder to avoid collecting PII in the first place.
I don't disagree with you but as someone who has thought a moderate amount about data security at a "bigco", I will point out something I haven't seen people really talk about...
Audit trails (of who did/saw what in a system) and PII-reduction (so you don't know who did what) are fundamentally at odds.
Assuming you are already handling "sensitive PII" (SSNs, payroll, HIPAA, credit card numbers) appropriately, which constitutes the security best practice: PII-reduction or audit-reduction?
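To make the tension concrete: the usual compromise is keyed pseudonyms, which keep the audit trail's ability to correlate an actor's actions without storing who the actor is. A toy sketch in Python (the pepper and field names are hypothetical):
```python
import hmac, hashlib, json, time

# Hypothetical per-system secret ("pepper"), stored outside the log store.
AUDIT_PEPPER = b"rotate-or-destroy-to-redact-old-logs"

def pseudonym(user_id: str) -> str:
    """Stable keyed pseudonym: the same user correlates across records,
    but the raw identity never lands in the audit trail itself."""
    return hmac.new(AUDIT_PEPPER, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def audit(user_id: str, action: str, resource: str) -> str:
    """Audit record that answers "who did what" only for someone who
    also holds the pepper (e.g. an investigations team)."""
    return json.dumps({
        "ts": int(time.time()),
        "actor": pseudonym(user_id),
        "action": action,
        "resource": resource,
    })

print(audit("alice@example.com", "read", "payroll/2024"))
```
Destroy the pepper and old logs are effectively redacted; keep it and investigators can still re-derive a suspect's pseudonym. You never get full PII-reduction and full auditability at once, just a dial between them.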
Let's say the CEO agrees with you and is horrified of any amount of unnecessary data being stored.
How would they then enforce this in a large company with 50k programmers? This was what the previous post was discussing.
Not to mention, a lot of this data is necessary. If you're invoicing, you need to store the names and many other kinds of sensitive data of your customers; you are legally required to do so.
Culture change. The CEO can push for top down culture change to get people to care about this stuff. Make it their job to care. Engage their passion to care.
It’s not easy, but it can move the needle over time.
That is easier said than done. In order to achieve that effectively every employee that has any relation to data needs to be constantly vigilant in keeping PII to a minimum, and properly secured.
It is often much easier to use an email address or an SSN when a randomly generated ID, or even a hash of the original data, would work fine.
I'm not saying that we shouldn't put more effort into reducing the amount of data kept, but it isn't as simple as just saying "collect less data".
And sometimes you can't avoid keeping PII.
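For what it's worth, the "randomly generated ID" option is just tokenization, and the core of it is tiny; the hard part is operating the mapping store. A toy sketch in Python (everything here is hypothetical; note that a plain hash of a low-entropy value like an SSN is brute-forceable, which is why the token is random):
```python
import secrets

# Hypothetical mapping table; in practice a hardened "vault" service,
# not an in-process dict.
_vault: dict[str, str] = {}

def tokenize(real_value: str) -> str:
    """Opaque surrogate key: random, so it leaks nothing if stolen.
    (A plain hash of an SSN can be brute-forced: ~1e9 candidates.)"""
    token = secrets.token_urlsafe(16)
    _vault[token] = real_value
    return token

def detokenize(token: str) -> str:
    """Only the vault can map back to the real value."""
    return _vault[token]

ssn_token = tokenize("123-45-6789")
print(ssn_token)  # safe to store in reports, data lakes, logs, ...
```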
There's another side to it, which you allude to with the giveaway of credit monitoring services after data breaches. The whole reason the data is valuable is for account takeover and identity theft, because identity verification uses publicly available information (largely publicly available, or at least discoverable, even without breaches). But no one wants to put in the effort to do appropriate identity verification, and consumers don't want to be bothered to jump through stricter identity verification hoops and delays --- they'll just go to a competitor who isn't as strict.
So we could make the PII less valuable by not using it for things that attract fraudsters.
Hell, in this instance, just replacing non-EOL equipment that had known vulnerabilities would have gone a long way. We're talking routing infrastructure with implants designed years ago, still vulnerable and shuffling data internally.
The "problem" is noone cares and certainly doesn't want to pay for the costs, especially the end users. That EOL equipment still works, there are next to no practical problems for the vast vast vast vast vast vast vast majority of people. You cannot convince them that this is a problem (for them) worth spending (their) money on.
Even during the best of times people simply do not give a fuck about privacy.
Honestly, if there is a problem at all I would say it's the uselessness of the Intelligence Community when actually posed with an espionage attack on our national security. FBI and CISA's response has been "Can't do; don't use." and I haven't heard a peep from the CIA or NSA.
Until companies are held liable for security failures they could have and should have prevented, there's no incentive for anyone to do anything. As long as the cost of replacing hardware, securing software, and hiring experienced professionals to manage everything is higher than the cost of suffering a data breach companies aren't going to do anything.
I've seen the same thing at previous jobs; I had a lot to do and knew a lot of security issues that could potentially cause us problems, but management wasn't willing to give me any more resources (like hiring someone else) despite increasing my workload and responsibilities for no extra pay. Surprise, one of our game's beta testers discovered a misconfigured firewall and default password and got access to one of our backend MySQL servers. Thankfully they reported it to us right away, but... geez.
>The "problem" is noone cares and certainly doesn't want to pay for the costs, especially the end users
Well I care. I’d pay a premium to a telco that prioritized security and privacy. But they are all terrible, hoovering up data, selling it indiscriminately and not protecting it. If they all suck then the default is to use the cheapest.
It’s definitely why I use Apple devices because I can buy directly from Apple and they don’t allow carriers to install their “junkware”.
That EOL equipment probably shouldn't be EOL though. Part of the blame should go to equipment makers that didn't bother to send out updates to fix the vulnerability in still functional equipment.
Another issue is lack of education/training/awareness among developers.
A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.
But AT&T and their 42,690 partners say they value my privacy :(
They do value your privacy! They just don’t like to share how many cents it’s worth to them
Apple seems to be willing to spend money on this kinda stuff. But the reason why they do this is because it allows them to differentiate their offering from the others, with privacy being part of the "luxury package", so to speak. That is - their incentive to do so is tied to it not being the norm.
Apple and Google care about this because they handle more customer data and require more customer trust than most companies.
People were shitting a brick over a pretty minor change in photo and location processing at Apple. That’s because they don’t screw up like this.
The point is that Apple specifically goes out of the way to avoid having customer data in the first place.
(Google, on the other hand, is the opposite.)
But, as far as I can tell, the only reason why Apple does this is because privacy these days can be sold as a premium, luxury feature.
I work in internal tools development, aka platform engineering, and this is interesting:
> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top, or hire people who build these kinds of systems from the ground up. There are plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers' data.
This is not a tools problem, this is an incentives and politics problem.
Telecoms will not get fined for this breach, or will be fined an amount that isn't meaningful, so they are not going to care.
I'm not sure why it's either/or to you. Seems to me like we're talking about the same problem but stated from two different perspectives.
Politics has historically incentivized job creation.
Because you came in acting like an Internal Developer Platform would be a fix for their problems when it won't be. In fact, I doubt the lack of an IDP is their problem.
As an SRE, I'm just over everyone running around acting like another tool is going to solve the problem. It's not; incentives need to be present for people not to be completely terrible at their jobs.
Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
An IDP is not a secrets management tool or vice versa. IDPs are more like connectors of your internal tools/platforms. Their key metrics have more to do with toil reduction and velocity, but they can certainly solve the kinds of problems that lead to a company thinking they need a group of people focusing solely on reliability.
> Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
I am an SRE. I stopped using that title professionally some time ago and started focusing on what makes companies reach for SRE when the skillset is the same as a platform engineer's.
A post I wrote on the subject: https://ooo-yay.com/blog/posts/2024/you-probably-dont-need-s...
After Apple argued for years that a mandatory encryption-bypassing, privacy-bypassing backdoor for the government could be used by malicious entities, and the government insisted that it's all fine, don't worry, we're now seeing those mandatory encryption-bypassing, privacy-bypassing backdoors for government being used by malicious entities, and suddenly the FBI is suggesting everyone use end-to-end encryption apps because of the fiasco that they caused.
But don't worry, as soon as this catastrophe is over we'll be back to "encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win."
The story is a little longer than this. A bunch of folks from academia and industry have been fighting the inclusion of wiretapping mandates within encrypted communications systems; the fight goes back to the Clipper chip. These folks made the argument that something like Salt Typhoon was inevitable if key escrow systems were mandated. It was a very difficult claim to make at the time because there wasn’t much precedent for it: electronic espionage was still in its infancy, and the idea that our information systems might be systematically picked open by sophisticated foreign actors was just some crazy idea that ivory tower eggheads cooked up.
I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.
That's not exactly true. The FCC's E911 rules and other laws require the telcos to have access to location data and to record calls/texts for warrants. The problem is both regulatory and commercial. It is unrealistic to expect either the general public or the government to go with real privacy for mobile phones. People want LE/firefighters to respond when they call 911. Most people want organized crime and other egregious crimes to be caught/prosecuted, etc. etc.
Nonsense. I kindly informed my teenage niece of the fact all her communications on her phone should be considered public, and the nature of Lawful Interception, and the tradeoffs she was opted into for the sake of Law Enforcement's convenience.
She was not amused or empathetic to their plight in the slightest. Population of at least 2 I guess.
Make that population of 3. I'm not a fan either. But I'm also realistic. I treat the phone as what it is: malicious spyware. But I realize that most people want the convenience and the safety (of sorts) of dialing 911 and getting the right dispatch.
If law enforcement actually did their jobs, this would be more understandable. I don’t know about you or others’ experiences, but when I’ve called the police to report a crime (e.g. someone casually smashing car windows at 3 in the afternoon and stealing anything that isn’t bolted down), they never show up and usually just tell me to file a police report which of course never gets actioned. Seems pretty obvious to me that weakening encryption/opsec to “let the good guys in” is total nonsense and that there are blatant ulterior motives at play. To be clear, I’m a strong proponent of good security practices and end-to-end encryption.
There's not nearly enough public information to discern whether or not this had anything to do with stored PII or lawful interception. All we know is that they geolocated subscribers.
The SS7 protocol provides the ability to determine which RNC/MSC a phone is paired with at any given time: it's fundamental to how the network functions. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.
> and end-to-end encrypted their customers' data
Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared data scenarios (e.g. shared E2E group chats like Facebook Messenger) are implemented.
Matrix has a detailed write up intended for client implementers: https://matrix.org/docs/matrix-concepts/end-to-end-encryptio...
It should give you some ideas on how it's done.
Thanks! I've added it to my reading list.
You want to search for the "double ratchet protocol", which brings up articles like this[1].
[1] https://nfil.dev/coding/encryption/python/double-ratchet-exa...
Exactly the kind of thing I was looking for, thank you! And thanks for the tip about "double ratchet protocol," that helps a ton.
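For the group-chat case specifically: Signal-style designs distribute each member's initial "sender key" over pairwise Double Ratchet sessions, and each sender then advances a one-way KDF chain per message. A toy sketch of that chain step in Python (the HMAC inputs 0x01/0x02 follow the Signal spec; the handshake is faked):
```python
import hmac, hashlib

def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One ratchet step: derive this message's key and the next chain key.
    The derivation is one-way, so compromising today's chain key does not
    reveal yesterday's message keys (forward secrecy)."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

# In real protocols the first chain key comes from a DH handshake
# (X3DH in Signal); faked here for the sketch.
ck = hashlib.sha256(b"session secret from the initial handshake").digest()
for i in range(3):
    ck, mk = kdf_chain_step(ck)
    print(f"message {i}: key {mk.hex()[:16]}...")
```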
Intelligence agencies may use that data, but there are plenty of financial incentives to keep that data regardless. Mining user data is a big business.
The best solution to privacy is serious liability for losses of private customer data.
Leak or lose a customer's location tracking data? That'll be $10,000 per data point per customer please.
It would convert this stuff from an asset into a liability.
All of these calls for serious fines, yet no indication of where the fine is to be paid. Fines mean the gov't is getting the money, yet the person whose data was lost still gets nothing. Why does the person that was actually harmed get nothing while the gov't, which did nothing, gets everything?
Even better - give the customer the $10k.
This exactly. Data ought to be viewed as fissile material. That is, potentially very powerful, but extremely risky to store for long periods. Imposing severe penalties is the only way to attain this, as the current slap on the wrist and offer of ID-theft/credit monitoring is an absurd slap in the face to consumers, who are inundated with new and better scams from better-equipped scammers every day.
The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.
Yeah, take an externality, make it priceable, and then "the market" and amoral corporations will start reacting.
Same principle as fines for hard-to-localize pollution.
Corporations' motivations rarely coincide with deep, consistent systems strategy, and largely operate reactively and in a manner where individuals get favorable performance reviews for adding profitable features or saving costs.
They are appropriately motivated in this case, carriers would surely rather have no idea whatsoever about the data they are carrying. The default incentive is they'd really rather avoid being part of any compliance regimes or law enforcement actions because that sort of thing is expensive, fiddly and carries a high risk of public outcry.
If they had the option, the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customers' behaviour.
They don't, because that would be super-illegal. The police and intelligence services demand that they snoop, log and avoid data-minimisation techniques. Given that regulatory demand, it was entirely a matter of time before these sorts of breaches happened; if the US government demands the data then sooner or later the Chinese government will get a copy too. I assume that is a trade-off the US government is happy to make.
While I agree, isn't this a degree of victim blaming? They were hacked by a state actor and every thread ignores the elephant in the room.
They had a backdoor. Someone used the backdoor. You stick your hand in a running lawnmower and it gets chopped off. Nobody is surprised.
Who put the backdoor there? The US government did.
No.
A telecommunications carrier may comply with CALEA in different ways:
* The carrier may develop its own compliance solution for its unique network.
* The carrier may purchase a compliance solution from vendors, including the manufacturers of the equipment it is using to provide service.
* The carrier may purchase a compliance solution from a trusted third party (TTP).
https://www.fcc.gov/calea
CALEA is a mandate from the U.S. Government to backdoor all telecom infrastructure for U.S. LE and intelligence purposes.
> But intelligence agencies insist on having access to innocent citizens' conversations.
Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.
We'll never have a secure internet when it's being constantly and systematically undermined.
Yes, but spies are going to spy, so we should focus on getting software built to have security by design and not just keep outsourcing to the cheapest programmers who don't even know what SQL injection is.
Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs and it costs them money to fix issues, so they often rely on security through obscurity which is not much of a solution.
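On the SQL injection point, the whole vulnerability and its fix fit in a few lines; a minimal illustration (Python/sqlite3, hypothetical schema):
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # attacker-controlled input

# Vulnerable: string concatenation lets the input rewrite the query.
rows = conn.execute("SELECT role FROM users WHERE name = '" + name + "'").fetchall()
print("injected:", rows)       # returns alice's row despite the bogus name

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print("parameterized:", rows)  # returns nothing
```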
Meanwhile US banks, Venmo, PayPal, etc all insist on using "real" phone numbers as verification.
Funny that Venmo won't let me use a voip number, but I signed up for Tello, activated an eSIM while abroad and was immediately able to receive an SMS and sign-up. For the high barrier cost of $5. Wow, such security. Bravo folks.
These stem from a requirement to know you as a person in some verifiable way. These are legal and regulatory requirements but the laws and requirements are there to ensure finserv can meaningfully contain criminal activity - fraud, theft, money laundering, black market, terrorism financing, etc. It turns out by far the most effective measure is simply knowing who the principals are in any transaction.
Some companies have much lower thresholds for their KYC, but end up being facilitators of crime and draw scrutiny over time by both their more regulated partners and their governments.
I’d note that the US is relatively lax in these requirements compared to Singapore, Canada, Japan, and increasingly the EU. In many jurisdictions you need to prove liveness, do photo verification, sometimes video interviews with an agent showing your documents.
> know you as a person in some verifiable way .. the laws and requirements are there to ensure .. knowing who the principals are in any transaction.
Except the person you’re responding to explains succinctly how this is security theater that accomplishes little and is ultimately just a thinly veiled tactic for harassing users / coercive data collection. And the person above that is commenting that unnecessary data collection is just an incentive for hackers.
Comments like this just feel like apologism for bad policies, at best. Does anyone really think that people need to be scrutinized because most money laundering is small transactions from individuals, or, is it still huge transactions from huge customers that banks want to protect?
Let me make it even more clear. I registered from [South American country]. Called from a US Voip. Told them I was in [US State]. They called my bluff. I clarified exactly what I was doing and they immediately approved the line. Took less than a minute.
I’m not sure I claimed simple phone number collection requirements are necessarily good policy or that they’re effective. I did note that other regimes have more draconian but more effective measures. I was explaining the provenance of such requirements - and that the base motivation is KYC. Being in the industry for a long time, from small fintechs to massive institutions, I’ve never seen any place that’s intentionally harassing or being coercive - in fact the pressure is towards minimization of requirements and easing of onboarding / KYC as much as they can get away with. However this also turns into a farcical underinvestment in UX, because management often believes that by ignoring the function and turning the thumbscrews on their KYC functions they can somehow make it better rather than worse - worse to the extent of appearing harassing and coercive, or worse to the extent of exposing legit users to fraud and hacking.
The issue, though, boils down to governments not wanting the financial infrastructure in their jurisdiction to allow unfettered crime. I’ve never seen a single government (granted, I’ve never seen what happens in extremely oppressive regimes, as we don’t generally do business there due to sanctions controls) that actively collects KYC data outside of large transactions; the regulations exist to ensure a minimum baseline of KYC so the companies themselves can comply and reduce their own losses and instability, as someone is often left liable in fraud, and in money laundering or sanctions evasion some institution is subject to fines for facilitation.
But to be frank, I think very little of what’s done is materially successful against competent criminals, and the consequence of being caught is usually just being blocked until they find a way around. To that end it’s not so much security theatre as compliance theatre. On the other hand it does act as a high-pass filter, as most fraud and financial crime is NOT competent. By and large retail finserv is a minimization effort, not a prevention effort.
The regulations that are effective at prevention are usually so restrictive and so difficult to implement that they’re absurd both for the finserv to implement and for the participants to get through the hurdles.
I don’t know that there are any perfect solutions, and what exists is generally dumb, but at the core the intentions are good. It’s foolish though to look at something as complex as financial infrastructure and wave it away as harassment and coercion rather than well-intentioned incompetence.
A phone number is not an identity document, and you can rent a number cheaply on the black market. Also, there should be no verification for small amounts of money. We can use cash anonymously, so why can't we transfer money anonymously?
> In many jurisdictions you need to prove liveness, do photo verification, sometimes video interviews with an agent showing your documents.
When vtuber-esque deepfakes become trivial for the average person, I wonder what the next stage in this cat-and-mouse becomes. DNA-verification USB dongles?
Why go straight to dystopia when notaries public exist?
Online notaries have been a thing for a while now. Don’t worry, we can still have dystopias with notaries.
The DNA-collecting businesses have already been hacked.
Maybe you could just, you know, show up to a bank branch? Like people have done for centuries?
Physical businesses? The horror! Won't someone think of the fintechs?
Or what if I live in a rural area and have very few local branch banks available?
I actually had an issue with this and ended up sending a notarized letter by snail mail, since I didn't feel like making a special 1hr each way trip during business hours to the closest branch.
> Or what if I live in a rural area and have very few local branch banks available?
Then you have to be ready to accept that there are advantages and disadvantages to your choice of where you live, and that is one of the latter.
There's a reason rural property is so cheap. It comes with a lot of disadvantages and inconveniences and costs that city-dwellers don't need to pay.
City taxes are a never ending bitch.
There is no right to not be inconvenienced by living in a remote area in any country I’m aware of.
If a country does not strive to make good use of all its land and attempt to better the lives of its people why are there wars? Clearly they're fine with their top 3 cities. /s
Seriously, you see this in any country of any size. Remote may just mean 300 km / 186 mi from the coast. Politicians go where the votes are, of course, but this just means disregarding rural areas is a self-fulfilling prophecy. The more you do it, the more remote they become.
You can, at the same time, verify a person's identity upon opening the account (as you mentioned, with documents) and use TOTP MFA instead of SIM-based authentication. If regulators require SIM-based authn, then it's just bad policy, which should come as a surprise to no one when it comes to government regulation. Finally, KYC is for the IRS. The illusion of safety makes a good selling point, though.
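For concreteness: TOTP needs nothing from the phone network at all; both sides derive a code from a shared secret and the clock. A minimal sketch of the RFC 6238 math in Python (the secret is hypothetical, provisioned once at enrollment, e.g. via a QR code):
```python
import base64, hmac, hashlib, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter,
    then RFC 4226 dynamic truncation down to a short numeric code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches the user's authenticator app
```
No SIM, no carrier, nothing to SIM-swap.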
US regulators don't normally specify down to 'require SIM-based authn'. Instead they give vague directives that companies have to determine their own implementation for meeting. And the implementation needs to be blessed by corporate AND insurance company lawyers, which too often ends up meaning those lawyers dictate the implementation.
My google voice number is unlikely to be stolen from me, but instead I have to use a 'real' phone number that could be compromised by handing cash to an employee at a store.
One time a company retroactively blocked VOIP numbers, which was really stupid.
> My google voice number is unlikely to be stolen from me
I'd say that with Google, chances are that they just stop offering the service.
When Google Voice was brand new I snagged me a number. (Since lost because I did not respond to a prompt to keep it alive, or something?) I wonder if they anticipated the cost of keeping those around for decades. Managing someone's personal phone number is a solemn commitment that you can't just drop willy-nilly.
The only solemn commitment Google has is to the bottom line.
Aren't they still supporting old Google Voice numbers though? I don't see how they could be making any money on that.
Only US domestic calls are free, international calls have a per-minute charge.
That's one of their older services. I assume they really like the data they get from it.
This is why I like Google Fi. It is much harder to do an account takeover of a Google Fi number compared to most telcos. The attacker would have to take over the Google account, which seems to be harder to do.
I agree, and also use Fi
But, I worry about what happens if I somehow get locked out of the account…
Verifying people after account loss/compromise is hard.
So which would you prefer:
(A) A low-level customer service representative can restore your access, but said representative is arguably susceptible to social engineering and other human weaknesses.
(B) Your account can be protected by a physical 2FA key (YubiKey), but in the case of loss or a compromised account, the processes for recovery are hard to navigate and may not yield successful recovery?
In the case of (A) you have little security. In the case of (B) you can do a LOT to prevent account loss, but if bad things happen (whether your fault or not) you are locked out by default.
From a privacy point of view, I'm not sure that (B) is such a bad option.
You can mitigate (B) by using your own domain with Google Fi and the basic workspace account. That way, if you are locked out you can switch providers taking your domain with you.
You still lose stored data, your phone number, etc.
But you could make the argument that you should back up cloud services the same way you back up hard drives.
True, but my Google Fi is attached to a free gmail account (because there is NO way to attach it to a Workspace account!!).
For my Workspace account, I backup with Google Takeout every 2 months to Backblaze B2. I also sync (with rclone) My Drive to a local directory, which is weekly uploaded to B2.
We need both, clearly advertised for what they are, and then everyone can make their own risk calculus.
Just post on socials (that you can still access) about being locked out and then hope for the best?
Well, for now, I still have former co-workers there who can help, but that won’t last forever.
For the most part, the "have a friend at Google" route doesn't help anymore. They even tell us Googlers to use the external process when our accounts get locked.
Because that real phone number is tied to an IMEI number, which can be used to track your historical and real-time location from telco data.
And yet it is 'impossible' for police to recover a stolen iPhone.
It’s entirely possible. They just don’t care.
Unrelated. Tracking data is service-side, not secret to the phone.
WhatsApp just retroactively blocked Google Voice numbers recently
That's nothing to do with security, just Meta wanting to know everything about you / being annoyed that another company has that data instead of them.
Security of shareholder value!
Knock on wood, mine still works. Please, any Whatsapp/Meta engineers, don't go specifically disable mine now that you read this comment.
How recently?
Blanket Denial is the issue.
What's needed is a PROCESS for verifying the number isn't used for fraud, and then allowing use. I don't know, maybe the fact that I've been a customer for YEARS, use that number, and have successfully done thousands of dollars in transactions over the platform without any abnormal issue?
Does Tello require KYC, that is, is the eSIM linked to an actual identity? At least in Europe (PSD2) that's the key for accepting a phone number as a 2FA method.
No KYC with Tello or USMobile.
All of my 2FA Mules[1] are USMobile SIMs attached to pseudonyms which were created out of thin air.
It helps a lot to run your own mail servers and have a few pseudonym domains that are used for only these purposes.
I bought a Tello eSIM to use for my Rabbit R1; I'm in the USA, was not required to provide any KYC, and received a (213) LA area code number. I recommend Tello so far.
Another cool thing that some companies do: refuse to deal with me because the family business account is in my dad's name, despite me knowing all the correct information to pretend to be my dad.
Like, the only reason I don't answer the phone and say "this is <Dad's name>", is because I'm honest. You'll never keep a bad guy out that already knows all the information that you ask for - he'll just lie and claim to be the business/account owner.
Technically they might be right, because your father might not trust you to access the account, so you need some kind of written permission.
> he'll just lie and claim to be the business/account owner.
He can lie, but he doesn't have another person's passport to prove his lies.
That written permission is worthless unless notarized & verified (which isn't going to happen for ordinary things) because you can just write it yourself.
And you don't need a passport. I've never met a company that will require full KYC-level video-identification with you on every call. You say that you're you (it doesn't matter whether you actually are you), you give them the secret code and they're happy.
> For the high barrier cost of $5. Wow, such security. Bravo folks.
$5 is at least 5x the cost of a voip number. I'm not a bank, but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 on the number than if it was $1 or less.
"... but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 ..."
This is exactly it.
All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
So, when twilio (for instance) refuses to let you 2FA with anything other than tracing back to a real mobile SIM[1] (how ironic ...) it is not to help you - it is designed to slow down abusers.
[1] The "authy" workflow is still backstopped by a mobile SIM.
>All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
>These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
Sure does a great job for all the various online social media places that ostensibly have nothing to do with transacting money, still want my phone number, and still get overrun with spam and (promotion of) scams....
It's a whole bunch of tradeoffs; requiring a working, non-voip phone number does raise the cost for abusers, but it's not enough to make spam unprofitable.
Requiring a deposit would be more direct, but administration of deposits would be a lot of work, and you have an uphill battle to convince users to pay anything, and even if they want to pay, accepting money is hard. And then after all that, some abusers will use your service to check their stolen credit cards.
https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
Relevant reading.
Basically comes down to: the costs of acceptable levels of fraud < the cost of eliminating all fraud.
There are processes that would more or less eliminate all fraud, but they are such a pain in the ass that we just deal with the fraud instead.
Okay. So let me just pay an "application fee" or some such instead of making me jump through hoops.
I don't care. I know it's a numbers game. I know they don't care about me. But companies absolutely lose my business because of this bullshit.
Also, that is clearly a workaround that took some research to do. Aka you’re probably in the top 1% of the population from a ‘figuring out workarounds’ perspective.
VoIP abuse is so well known (and automated) that, even at $0.10 a number, it would be an order of magnitude easier to do.
Banks are always slow and behind the times because they are risk-averse. That has pros and cons.
It makes me think of Linux distros.
There are the ones that closely follow software updates, and you get to complain that things are breaking all the time.
And there are the stable distros, where you get to complain about how old and out of date everything is.
I still have about $15 of international calling credit on a GV number I hardly use anymore with no option of transferring or using that balance on a different platform like Google's Play store.
Can we talk about how Venmo doesn't even let you login from abroad... And their app doesn't provide a decent error message it just 403s.
The problem is that VOIP numbers, from companies like Bandwidth, are frequently used to perform various frauds. So many financial services ban them because the KYC for real numbers is much better.
I have more bank and credit accounts than the average person, probably. 5 bank accounts, and 8 credit accounts I can remember as active off the top of my head.
Every single one works with GVoice, except Venmo. Chase, Cap1, Fidelity, etc. Not small players.
So while I think you make a fair enough argument for sure, it doesn't seem to be the case when nobody else does it, and makes Venmo seem like a pain in the arse.
My GVoice number works with Chase, Citi, Discover, AMEX, Capital One. It does not work with Wells Fargo, despite allowing you to sign up with it. Took a notarized snail mail letter to fix that one.
In practice, these companies get a phone number I possess for 1-3 months on a travel SIM rather than the VOIP number I’ve steadily maintained for two decades and by which the US feds know me (because they don’t care).
Don't all financial institutions need some real identification with physical address to sign up? Phone numbers / email addresses should be for communication, not tracking.
KYC = know your customer?
Yes, and AML = Anti Money Laundering
Yes.
It has nothing to do with Kentucky’s Yummiest Chicken, if that’s what you were thinking.
Because VOIP requires a verified Google account and phone number, while traditional numbers can, uh, be purchased anonymously at the corner store.
Depends on which country. In places like India that’s not possible. Your cell phone number becomes a de facto identity so they require all kinds of identity documents to get a SIM.
So there's a cottage industry of middle men on the streets who will set you up with a SIM card, or a travel ticket or whatever, for people who don't have identity. (Or in some cases don't want to reveal their identity, but I reckon this is less typical.) Sure, you pay extra for the service, the middle man takes 10%, 30% or 500% and the identity is then with that person---or their fraudulent papers, I don't know how it works in detail.
It's usually the other way around - first countries introduce laws that require ID to buy a cell phone ("because criminals"), and then the phone number starts getting used as a de facto identity.
Ah yeah, good point. That makes more sense.
> while traditional numbers can, uh, be purchased anonymously at the corner store.
That is a closing window, and the case in fewer and fewer places. It won't be long until most people would need to fly across the globe or get involved with organised crime to pull that off...
You keep using that word. I do not think it means what you think it means.
The same level of security that shitter's checkmark introduced. All checkmark accounts are fake, and the ones without are real people, I guess?
The idea that scammers don't have digital money laying around just waiting on being spent on something is so absurdly out of touch on how everything in cyber works.
Corporations are "people".
Corporations "eat" money.
Entities that can feed a corporation, are treated as peers, i.e. "people".
Thus, on shitter, if you can pay, you are a person (and get a blue checkmark).
Oh, nice allusion. If corporations eat money and you're not paying, i.e., a free service. You are prey.
You aren't even the product. You're the raw material.
I work in security and this surprised me to see. Not that these companies got hacked, but the scope of the attack being simultaneous. Coordinated. Popping multiple companies at the same time says something about the goals the PRC has.
It risks a lot of "noise" to do it this way. Why not just bribe employees to listen in on high profile targets? Why try to hit them all and create a top level response at the Presidential level?
This feels optics-driven and political. I'm not sure what it means, but it's interesting to ponder on. Attacking infrastructure is definitely the modern "cold war" of our era.
This is a total yawn, and the norm. It looks coordinated because the team who focuses specifically on telecoms had their tools burned. Pick pretty much any sector of interest and the intelligence services of the top 50 countries all have a team dedicated to hacking it. The majority of them are successful.
Sadly even most people in security are woefully unaware of the scope and scale of these operations, even within the networks they are responsible for.
The "noise" here was not from the attacker. They don't want to get caught. But sometimes mistakes happen.
Interestingly, some of those teams dedicated to hacking are either private sector or a branch that nobody has heard of. I once interviewed for a company whose pitch to me was basically "we get indemnity to hack foreign telcos" and "we develop ways to spy that nobody has thought of". That was 20 years ago
What do those companies look like externally? Are they publicly known?
Some are specialized, some are diversified. Definitely public, I believe they all have to be listed on fedgov's contractor list? Some are obvious weapons contractors, some aren't (like extensions of big-name universities). If you see job listings for weapons development, cyber ops, secret-clearance software dev, cryptography, etc, that's a clue.
It probably wasn't a simultaneous attack, they probably penetrated over a long period of time. The defenders just found them all simultaneously (you find one, you go looking for the others)
> Why not just bribe employees to listen in on high profile targets?
Developing assets is complicated and difficult, attacking SS7 remotely is trivial, especially if you have multiple targets to surveil
Given the noise about Huawei and spy cranes, it would be interesting to know if the "attacks" were against any and all telecoms equipment, or just Chinese stuff; not that I think it would make any difference. The daylight (heh heh!) trolling for telecom and power cables is most definitely a (he ha!) signal aimed at western politicians. Another one is that while there are claims of North Korea taking crypto, no identifiable victim has stood up. Western politicians are attempting to redirect the whole world's economy based on saving us from the very things that are happening just now. So it does seem more than coincidental.
Aren't they attacks against the US government mandated backdoors in all equipment?
I think this is the perfect time to do something like this, in the midst of a presidential transition. Regardless of the outgoing and incoming politics, things will be more chaotic. While it won't be unnoticed, it's going to be down the lists of things to deal with probably, and possibly forgotten.
The most incompetent crook is the first one to get caught.
There's a huge selection bias factored into what attacks make the news.
Incompetence is just one dimension on odds of being caught.
You could be an incredibly competent and highly motivated crook and bad luck in the form of an intern looking at logs or a cleaning lady spotting you entering a building could take you down.
I can't confirm it because the descriptions of the hack are unclear but if more network operators say they've been hacked it is more and more likely the Chinese got in by attacking lawful intercept. This could happen in various ways: bribe or blackmail someone in law enforcement with access to a lawful intercept management system (LIMS), a supply chain attack on an LIMS vendor, hacking the authentication between networks and LIMS, etc.
If it is an LI attack the answer to which networks are compromised is: All of them that support automated LI.
That's a nasty attack because LI is designed to not be easily detectable because of worries about network operators knowing who is being tapped.
More likely they got access and then snooped any of the many insecure protocols used to manage network devices.
Anyone who has ever worked in networking will understand what I mean.
The networking industry is comically bad. They use SSH but never ever verify host keys, use agent forwarding, and use protocols like RADIUS or SNMP which are completely insecure once you pop a single box and grab the almost-always-global shared secret. Likewise the other protocols.
Do they use secure boot in a meaningful way? So they verify the file system? I have news for you if you think yes.
It’s kind of a joke how bad the situation is.
Twenty years ago someone discovered you could inject forged TCP resets to blow up BGP connections. What did the network industry do? Did they institute BGP over TLS? They did not. Instead they added TCP MD5 hashing (RFC 2385, from 1998: https://datatracker.ietf.org/doc/html/rfc2385) using a shared secret, because no one in networking could dream of using PKI. Still true today. If deployed at all, which it usually isn't.
If you want to understand the networking industry consider only this: instead of acknowledging how dumb the situation is and just using TLS, what we got, in 2010(!!), was this - https://datatracker.ietf.org/doc/html/rfc5925 - which is almost as dumb as 2385 and just as bad in actual deployment because they just keep using the same deployment model (the shared tuple). Not all vendors that “support” 5925 support the whole RFC.
As an aside this situation is well known. People have talked about it for literal decades. The vendors have shown little to no interest in making security better except point fixes for the kind of dumb shit they get caught on. Very few security researchers look at networking gear or only look at low end junk that doesn’t really matter.
FWIW PKI tends to mean a central point of failure. Some Russian organizations can't get TLS certificates because of sanctions.
Pki here does not mean a global CA. You can run your own CAs (and should).
Since only two parties are involved, why not use the easier pre-shared key system in that case?
For the many reasons I listed. Pre-shared keys are almost always global and you can’t do forensics to find the leak.
You can cross trust and establish alternative trust paths in PKIs
That reasoning is dubious.
They aren't saying that more have been hacked, they are saying that more have been discovered related to that hack. Any adversary at this level would be monitoring the news, and would take appropriate actions (for gain) or roll up the network rather than allow reverse engineering of IOCs.
More than likely this was not an LI-based attack; rather, they don't know for sure how they got in. Nearly all of the guidance is standard cybersecurity best practice for monitoring and visibility, and lowering attack surface, with few exceptions (in the CISA guidance).
The major changes appear to be the requirements to no longer use TFTP, and the referral to the manufacturer for source of truth hashes (which have not necessarily been provided in the past). A firmware based attack for egress/ingress seems very likely.
For reference, TFTP servers are what send out the ISP configuration to endpoints in their network, the modems (customers), and that includes firmware images (which have no AAA). Additionally, as far as I know, the hardware involved lacks the ability to properly audit changes to these devices (by design), and TR-47 is rarely used appropriately; the related encryption is also required by law to be backward compatible with known-broken encryption. There was a good conference talk on this a few years ago, at Cyphercon 6.
https://www.youtube.com/watch?v=_hk2DsCWGXs
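On the source-of-truth hashes point: the check itself is trivial once vendors actually publish hashes for their images. A sketch (Python; the path and the vendor hash are hypothetical placeholders):
```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream the image through SHA-256 so large firmware files fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash as published by the equipment vendor (hypothetical value and path).
VENDOR_HASH = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

if sha256_file("/tftpboot/firmware.img") != VENDOR_HASH:
    print("MISMATCH: image differs from the vendor's source of truth")
```
The hard part has been getting manufacturers to publish the reference hashes in the first place, which is exactly what the new guidance pushes for.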
The particular emphasis on TLS 1.3 (while now standard practice) suggests that connections may be being downgraded, and that the hardware/firmware at the CPE bridge may be transparently performing MITM against public sites over earlier TLS versions, if this is the case (it's a commonly needed capability).
The emphasis on using specific DH groups may point to breaks in the key exchange for groups not publicly known to be broken (but which are), which may or may not be a factor as well.
If the adversary can control, and insert malicious code into traffic on-the-fly targeting sensitive individuals who have access already, they can easily use information that passes through to break into highly sensitive systems.
The alternative theory, while fringe, is that maybe they've come up with a way to break Feistel networks (in terms of cryptographic breaks).
A while back the NSA said they had a breakthrough in cryptography. If that breakthrough was related to attacks on Feistel network structures (which a lot of cryptography is built on), that might explain another way in (although this is arguably wild speculation at this point). Nearly every computer has a backdoor co-processor built in, in the form of TrustZone, the Management Engine, or AMD's PSP. It's largely secured only by crypto, without proper audit trails.
It presents low-hanging, concentrated fruit in almost every computation platform on earth, and by design it's largely not auditable or visible. Food for thought.
A quantum computer breaks a single signing key for said systems, and it acts like a golden-key backdoor to everything. All the eggs in one basket. Not out of the realm of possibility at the nation-state level. No visibility means no perception or ability to react, or to isolate the issues except indirectly.
You don’t need to bring up quantum computers. Almost all protocols in the networking industry are basically running with a shared secret that is service global. Pop any box at all and you have the world for any traffic you can capture.
The problem with the shared secret model isn’t that it can be stolen, it’s that it is globally shared within a provider network. You can’t root it in a hardware device. You can’t do forensics to see from what node it was stolen.
We are talking about an industry where they still connect console servers, often to serial terminal aggregators that are on the internal network alongside the management Ethernet ports, which have dumb guessable passwords, often the same one on every box, that all their bottom tier overseas contractors know.
It’s just sad.
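For contrast, even a crude per-device derivation from one master key would fix the attribution problem: a leaked secret then points at exactly one box. A toy sketch (Python, names hypothetical):
```python
import hmac, hashlib

# Hypothetical master key; lives only in the provisioning system.
MASTER = b"provisioning-master-key"

def device_secret(device_id: str) -> bytes:
    """Per-device secret derived from one master key. A leaked secret
    identifies exactly one node, and re-keying one box doesn't force
    re-keying the whole network, unlike a global RADIUS/SNMP secret."""
    return hmac.new(MASTER, device_id.encode(), hashlib.sha256).digest()

print(device_secret("edge-router-042").hex())
```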
> You don't need to bring up quantum computers. Almost all protocols in the networking ...
It's true that those protocols are basically running shared secrets, but those areas all have some visibility, with auditing and monitoring.
You crack a root or signing key at the co-processor level and you can effectively warp and control what anyone sees or does with almost no forensics being possible.
It fundamentally allows a malevolent entity the ability to alter what you see on the fly with no defense possible. Such is the problem with embedded vulnerabilities; it's just like that Newag train thing.
Antitrust and bricking for monopolistic benefit is far more newsworthy than, say, embedding a remote radio-controlled off switch with no plausible cover that can brick the trains as they move harvests, foodstuffs, or military equipment.
It's corruption, not national security. Would many believe that it's the latter over the former when it does both?
It is sad that our societal systems have become so brittle that they cannot check or effectively stop the structural defects and destructive influences within themselves.
Some related prior discussion:
PRC Targeting of Commercial Telecommunications Infrastructure
https://news.ycombinator.com/item?id=42132014
AT&T, Verizon reportedly hacked to target US govt wiretapping platform
Wasn’t it a couple years ago the intelligence community was arguing for backdoor mandates, and now the FBI recommends Signal for safe chats? Such a farce. Hopefully the new admin goes through their emails and text messages over the last 4 years. Privacy for me, not for thee, I suppose…
"...implies that the attack wasn't against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers"
Yup. The attack hit the CALEA backdoor via a wiretapping outsourcing company. Which one?
* NEX-TECH: https://www.nex-tech.com/carrier/calea/
* Subsentio: https://www.subsentio.com/solutions/platforms-technologies/
* Sy-Tech: https://www.sytechcorp.com/calea-lawful-intercept
Who else is in that business? There aren't that many wiretapping outsourcing companies.
Verisign used to be in this business but apparently no longer is.
Thank you for posting this. The search term "calea solutions"[1] also brings up some relevant material, such as networking companies advising how to set up interception, and an old note from the DoJ[2] grumbling about low adoption in 2004 and interesting tidbits about how the government sponsored the costs for its implementation.
[1] https://www.google.com/search?client=firefox-b-d&q=calea+sol...
Where does "...implies that the attack wasn't against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers" come from? From Schneier? Because if you go to the actual reporting, in the WSJ for example, it doesn't imply that the attack was against TTP providers. Also, TTP providers are optional.
WSJ: U.S. Wiretap Systems Targeted in China-Linked Hack https://www.wsj.com/tech/cybersecurity/u-s-wiretap-systems-t...
That seems pretty clear.
nope :)
Wiretap systems are on the telecom provider side, and they're a bunch of different, in many cases ordinary, pieces of networking equipment that can easily be misconfigured.
TTPs (a.k.a. the companies listed above) are optional and usually used by companies that don't have their own legal department to process warrants or don't want to deal with the fine details of intercepts.
> wiretapping outsourcing company
Is it a great idea to give all that info to India as well?
Nothing contradictory (in philosophy), really: they said American law enforcement should be able to break encryption when they have warrants and they now say Chinese spies should not be able to.
This is obviously technically impossible, but the desire for that end state makes a ton of sense from the IC’s perspective.
That something can simultaneously be impossible and sensible is peculiar. It almost suggests that the technique has merely not yet been figured out.
Secrets fail unsafe. Maybe an alternative doesn't.
It is sensible that people would want the impossible. It isn't sensible to try to mandate it.
Government keeps trying to mandate it in various ways. With predictably bad results.
How is it obviously technically impossible?
Whatever method is available to American law enforcement is eventually going to become available to Chinese spies. The record of keeping this kind of secret is abysmal. If by no other means, then by social engineering the same access that local police departments were supposed to have.
Salt Typhoon - which this discussion is about - is an example. Tools for tracking people that were supposed to be for our side turn out to also be used by the Chinese. Plus the act of creating partial security often creates new security holes that can be exploited in unexpected ways.
Either you build things to be secure, or you have to assume that it will someday be broken. There is no in between.
Something either has X degree of security (for everyone) or it does not.
The FBI has a weird mandate in that it's both counter-espionage and counter-crime, and those are two quite different missions. Unsurprisingly, the counter-espionage side wants great encryption, and the counter-crime side wants backdoorable encryption.
You want the new anti-democratic/authoritarian administration to look through the FBI's emails to find something to frame them for? You sure that's wise? Even if they don't respect privacy like they should?
It seems like every few years law enforcement puts out statements about how good encryption is for criminals, and then they have to walk it back as data breaches happen.
Sometimes you're on offense, sometimes you're on defense. The government does both.
It doesn't take much to read between the lines on those two statements. Feds have access to Signal if they want it, but are using it as filter paper against most attacks against the public etc.
The "feds" do not have access to Signal, except by CNE attacks against individual phones. Signal's security does not rely on you trusting the Signal organization.
It's ok for someone to believe that, but I don't believe that. Unfortunately there is no practical way to verify it either.
Well, if you're in a position where you can only put faith in someone else's word as to whether it's good for your needs (this is the vast majority of people), there's this: https://community.signalusers.org/t/overview-of-third-party-...
What are you talking about? Signal is open source, and its cryptographic security is trivially verifiable. If you don't trust the nonprofit behind it for whatever reason, you can simply compile it yourself.
> and its cryptographic security is trivially verifiable
That's going quite far. Even with all the details of it documented and open, there's a relatively small number of people who can actually verify that both the implementation is correct and the design is safe. Even though I can understand how it works, I wouldn't claim I can verify it in any meaningful way.
Multiple teams have done formal verification of the Signal Protocol, which won the Levchin Prize at Real World Crypto in 2017.
Sure, there are teams who have done it. But it's not trivial. The fact that there's a prize for it shows it's not trivial. If I chose a random developer, it's close to guaranteed they wouldn't be able to reproduce that. The chances go to zero for a random Signal user.
Alternatively: it's trivial for people sufficiently experienced with cryptography. And that's a tiny pool of people overall.
The idea isn't that you do formal verification of the protocol every time you run it. It suffices for the protocol to be formally verified once, and then just to run that one protocol. If you thought otherwise, you might as well stop trusting AES and ChaCha20.
It is possible for the core protocol to be tightly secure, while a bug in a peripheral area of the software leads to total compromise. Weakest link, etc. One-time formal verification is only sufficient in a very narrow sense.
It is also possible for a state-level adversary to simply hijack your phone, whatever it is, and moot everything Signal does to protect your communications. Cryptographically speaking, though, Signal is more or less the most trustworthy thing we have.
Just look at PuTTY and P-521 keys.
Or go back to Dual_EC_DRBG.
Unless DJB has blessed it, I'll pass.
What do those two issues have to do with each other?
These were showstopper bugs that betrayed anything they touched.
Avoiding this is obviously a huge effort.
Dual EC was a "showstopper bug"?
It did stop OpenSSL whenever you tried to actually use it in FIPS mode ;)
If you compile it yourself, can you still connect to the Signal servers?
And, even if you can connect with your own client, can you trust the server is running the code they claim it is? They were caught running proprietary server code for a time in 2020-2021. https://github.com/signalapp/Signal-Android/issues/11101#iss... / https://news.ycombinator.com/item?id=26715223
But the client is designed to not trust the server, that's why encryption is end-to-end. So does it matter?
In some sense, no - the protocol protects the contents of your messages. In another sense, yes - a compromised server is much easier to collect metadata from.
Metadata, yes. Of course, the protocols, and thus all the inconveniences of the Signal app people constantly complain about, are designed to minimize that metadata. But: yes. Contents of messages, though? No.
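To make the design concrete, here's a minimal sketch of the sealed-sender idea (my own illustration, not Signal's actual construction, which uses asymmetric enveloping rather than the symmetric Fernet stand-in here; the names and mailbox ID are made up): the routing layer only ever sees which mailbox an envelope is addressed to, and the sender's identity travels inside the ciphertext.

    # Toy sketch of "sealed sender": the server sees only the destination
    # mailbox; the sender's identity is inside the encrypted blob.
    # Fernet is a symmetric stand-in for Signal's real asymmetric envelope.
    import json
    from cryptography.fernet import Fernet

    recipient_key = Fernet.generate_key()   # stand-in for recipient key material
    recipient_box = Fernet(recipient_key)

    # Sender seals identity + message together; only the recipient can open it.
    sealed = recipient_box.encrypt(json.dumps({"from": "alice", "body": "hi"}).encode())

    # All the server handles: an opaque blob and a destination mailbox ID.
    envelope = {"to_mailbox": "mb-7f3a", "blob": sealed}

    # Recipient side: open the envelope; the sender is learned only here.
    message = json.loads(recipient_box.decrypt(envelope["blob"]))
    assert message["from"] == "alice"

A compromised server in this model still learns delivery timing and the recipient mailbox, which is exactly the residual metadata being discussed.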
If Signal, the service, was designed to minimize metadata collection, then why is it so insistent on verifying each user's connection to an E.164 telephone number at registration? Even now, when we have usernames, they require us to prove a phone number which they pinky-swear they won't tell anyone. Necessary privacy tradeoff for spam prevention, they say. This isn't metadata minimization, and telephone number is a uniquely compromising piece of metadata for all but the most paranoid of users who use unique burner numbers for everything.
This is the most-frequently-asked question about Signal, it has a clear answer, the answer is privacy-preserving, and you can read it over and over and over again by typing "Signal" into the search bar at the bottom of this page.
The answer is not privacy-preserving for any sense of the word "privacy" that includes non-disclosure of a user's phone number as a legitimate privacy interest. Your threat model is valid for you, but it is not universal.
The question you posed, how Signal's identifier system minimizes metadata, has a clear answer. I'm not interested in comparative threat modeling, but rather addressing the specific objection you raised. I believe we've now disposed of it.
I don't believe there has been any such disposition in this thread. There have been vague assertions that it's been asked and answered elsewhere. Meanwhile, the Signal source code, and experience with the software, clearly demonstrates that a phone number is required to be proven for registration, and is persisted server-side for anti-spam, account recovery, and (as of a few months ago, optional) contact discovery purposes.
Yes. There are also libraries that do this, like libsignal.
It’s not practically open source, though - how many people actually build it themselves and sideload it onto their Android/iPhone?
How much effort would it be for the US government to force Google to ship a different APK from everyone else to a single individual?
I don't know, a lot? They could with the same amount of effort just get Google to ship a backdoored operating system. Or the chipset manufacturer to embed a hardware vulnerability.
"Here's a court order, you must serve this tainted APK we built to the user at this email"
VS
"You must backdoor the operating system used on billions of devices. Nobody can know about it but we somehow made it a law that you must obey."
Come on, that's not the same amount of effort at all.
Looks like exactly the same amount of effort to me?
Effort maybe but not likelihood of discovery
The cryptography is not where Signal is vulnerable. What Signal is running on, as in operating system and/or hardware that runs other embedded software on "hidden cores", is how the private keys can be taken.
Anything you can buy retail will for sure fuck you the user over.
Retail hardware actually has a better track record at the moment than bespoke, closed market devices. ANOM was a trap and most closed encryption schemes are hideously buggy. You're actually better off with Android and signal. If we had open baseband it would be better, but we don't, so it's not.
Perfect security isn't possible. See "reflections on trusting trust".
Bespoke but-not-really-bespoke closed-market devices made by the right people are very secure, but they are not sold to the profane (you).
> ANOM was a trap
Yes, ANOM was intended to be a trap.
> and most closed encryption schemes are hideously buggy
Yes they are. Hence some of us use open encryption schemes on our closed-market devices.
> You're actually better off with Android and signal.
I am better off with closed-market devices than I am with any retail device.
> If we had open baseband it would be better
And the ability to audit what is loaded on the handset, and the ability to reflash, etc. In the real-world all we have so far is punting this problem over to another compute board.
> Perfect security isn't possible.
Perhaps, but I was not after "perfect security", I was just after "security" and no retail device will ever give me that, but a closed-market device already has.
> See "reflections on trusting trust".
Already saw it. You're welcome to see:
- https://guix.gnu.org/blog/2020/reproducible-computations-with-guix/
- https://reproducible-builds.org
- https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-building-from-source-all-the-way-down/
Oh, so none of this has anything to do with Signal. Ok!
In theory, "none of this has anything to do with Signal", and you are correct; but back over here in reality, Signal runs on these systems.
Hence the security afforded by Signal is very weak in practice, and questionable at best.
> Unfortunately there is no practical way to verify it either.
Discuss an exceedingly clear assassination plot against the President exclusively over Signal with yourself, between a phone that's traceable back to you and a burner that isn't. If the Secret Service pays you a visit, and that's the only way they could have come by it, then you have your answer.
I think the bar for paying such a visit would be infinitely high; they would find a way to act on it in a more clandestine manner to keep the ruse going.
Let us know how that goes
Signal's servers have access to your profile, settings, contacts, and block list if the PIN you select has low security.
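For intuition on why PIN strength matters here: Signal's recovery scheme relies on enclave-enforced guess limits, and if that rate limiting were ever bypassed, a key stretched from a 4-digit PIN falls to trivial offline search. A hedged sketch (PBKDF2 stands in for the actual Argon2-based stretching; the salt and PIN are invented):

    # Why a low-entropy PIN only protects you while rate limiting holds:
    # all 10,000 four-digit PINs can be searched offline in minutes.
    import hashlib

    salt = b"per-user-salt"  # hypothetical per-user salt
    target = hashlib.pbkdf2_hmac("sha256", b"4821", salt, 100_000)  # victim's key

    for guess in range(10_000):
        pin = f"{guess:04d}".encode()
        if hashlib.pbkdf2_hmac("sha256", pin, salt, 100_000) == target:
            print("recovered PIN:", pin.decode())
            break

A long alphanumeric PIN pushes the search space out of reach even if the server-side protections fail.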
Which is to say: in the worst-case plausible failure model for Signal, they get the same metadata access as all the other messengers do. OK!
Not all other messengers require a mobile phone number in order to get access, meaning not all other messengers have a view of users' social networks - some of them are anonymous, and Signal is not. It's a fundamental difference. But we've been here before.
They kill people based on metadata - they told us so. They don't need the rest.
Threema leaks no such metadata
You want to use this, by all means.
Were any/all of those vulnerabilities mitigated?
Per the link, yes. Here's the specific statement:
> Lessons Learned
> We believe that all of the vulnerabilities we discovered have been mitigated by Threema's recent patches. This means that, at this time, the security issues we found no longer pose any threat to Threema customers, including OnPrem instances that have been kept up-to-date. On the other hand, some of the vulnerabilities we discovered may have been present in Threema for a long time.
For what it's worth, and obviously I could have been clearer about this: what's interesting about that link is the description of Threema's design, not the specific vulnerabilities the team found.
While the statements are contradictory, I wouldn't take it as a sign of some vast conspiracy. I would just take it as a sign that they're stuck needing to give out some kind of guidance to prevent foreign access. While they are a domestic police service, they are also a counterintelligence service and thus need to provide some guidance there.
Telcos need a way to comply with court orders. That's it.
No, the feds require CALEA backdoors. Absent CALEA, a telecom could say they don't have the data or the capability.
The US military has, at least privately, switched away from any Signal usage within the past few months; it's undoubtedly compromised in some way. If the FBI is recommending it, it's for exploitative purposes and a false premise of safety.
So what’s the alternative?
Completely avoiding sensitive communication over mobile phones.
Session, Matrix, Tox perhaps
I know nothing about this field so I went looking for those product names.
I believe the Session referred to is here ... https://getsession.org/
Tox is here? https://tox.chat/
The Matrix I found seems to have been closed down earlier this month ... https://en.m.wikipedia.org/wiki/Matrix_(app) ... that's assuming I found the correct "Matrix".
If it matters to you don't take my word for those being the correct points of contact, that's just me searching for two minutes.
As a side rant, I wish people would choose less generic names for their projects, calling something "session" ? You might as well call it "thing".
This is probably the Matrix they meant: https://matrix.org/
Thanks, that does seem more plausible than the one I found.
SimpleX?
This is why we need device-to-device encryption on top of all the security that a telco has. There is no excuse for any connection I make being unencrypted at any point except at the receiver.
While you aren't wrong about needing end-to-end encryption, it would not have helped here. What China was after was metadata (who is communicating with whom), which is a completely different problem to solve.
The articles I saw said they could record phone calls at will.
Yes, but not by man-in-the-middle attacks between the device and the network. There are systems internal to the provider that let you listen to any call.
Because the US government forces them to have these systems and to not encrypt the calls. There should be more attention on the fact that, essentially, the US government hacked US telecoms for China's benefit.
Let's not overstate it. The US government hacks telecom for the benefit of the US government. Now having said that, as someone above mentioned, the intelligence agencies of the top 50 national governments are obviously all keen to use those hacks for their own benefit. And the flip side of that is that the US government is very interested in stopping these other national governments from succeeding.
Clearly, the counter-intel part of the US government effort has been less successful than the surveillance and intelligence gathering effort. But that doesn't mean that the US government wants all those other nations to be able to gather data from these systems. Our government wants nothing more than to be the only national government capable of gathering data from these systems.
Make your phone calls with Signal and you don't have this problem. So far the US government isn't forcing anyone to use unencrypted calling.
The hard part is making all the other people you need to regularly talk to use Signal.
If getting people to download free apps and sign in to them with their phone number were hard, most of HN wouldn’t exist.
Getting people to download free apps is easy.
Getting them to actually use them is hard, especially when the whole point of the app is to communicate with other people, and literally none of the people they regularly communicate with other than yourself use (or even know about) Signal.
Since the 80s, you've been able to spy on anyone's calls using the telco's standard maintenance features. You dial up a number, then dial another number, and you're basically patched in to the second number and can listen in on any current call. There was a different system required by the government for taps, but linemen have their own method so they can diagnose issues. At least that was still the case through the 2010s.
Stupidity and banality are a far greater threat than conspiracy.
Well obviously there is a good excuse, that users do not want to and cannot generally deal with key management. Even dealing with phone numbers is a hassle, and now you want to add a public key on top? One which cannot easily be written down, and is presumably tied to the handset so if you lose and replace your phone you stop being able to receive all phone calls until you manually somehow distribute your new key to everyone else?
End-to-end encryption has proven to be unworkable in every context it's been tried. There are no end-to-end encrypted systems in the world today that have any use; in fact, the term has been repurposed by the tech industry to mean pseudo encrypted, where the encryption is done using software that is also controlled by the adversary, making it meaningless. But as nobody was doing real end-to-end encryption anyway, the engineers behind that decision can perhaps be forgiven for it.
> pseudo encrypted, where the encryption is done using software that is also controlled by the adversary
I'd say there's a very real use for this, though, which is that with mobile applications it's more complicated to compromise a software deployment chain than it is to compromise a server-side system. If you're a state-level attacker and you want to coordinate a deployment of listening capabilities on Signal, say, you need to persistently compromise Signal's software supply chain and/or build systems, and do so in advance of other attacks you might want to coordinate with, because you need to wait for an entire App Store review cycle for your code to propagate to devices. The moment someone notices (say, a security researcher MITM'ing themselves) that traffic doesn't match the Signal protocol, your existence has been revealed. Whereas for the telcos in question, it seems it was possible to just compromise a server-side system to gain persistent listening capabilities, which could happen silently.
Now, this can and should be a lot better, if, say, the Signal app was built not by Signal but by Apple and Google themselves, on build servers that provably create and release reproducible builds straight from a GitHub commit. It would remove the ability for Signal to be compromised in a non-community-auditable way. But even without this, it's a nontrivial amount of defense-in-depth.
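As a sketch of what that community audit looks like mechanically (the file paths are hypothetical, and a real comparison of Android builds also has to strip the store-applied signature first, which reproducible-builds tooling handles):

    # Reproducible-build check: a local build from the tagged commit should
    # hash identically to the APK the store shipped (signature stripping,
    # which real APK comparisons need, is omitted here).
    import hashlib

    def sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256("build/outputs/apk/Signal-local.apk")    # hypothetical paths
    shipped = sha256("downloads/Signal-from-store.apk")
    print("reproducible" if local == shipped else "MISMATCH - investigate")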
Yes it's not useless and can help mitigate insider threats, but that isn't how it's presented by the messaging companies.
You can just force Google/Apple to roll out compromised versions to selected users and force them to keep their mouth shut about it.
Your comment concerns the situation where the state level attacker is the US.
As the article points out, there are many other adversaries to be concerned about. Protecting against them would be good. Don’t give up so quickly.
Aside, not the main point:
I actually do not know if we are at the level of "forced speech" in the US. Publishing hacked apps would fall under that category. Forced silence is a different thing, and less powerful. Still bad, obviously.
Apple Facetime is painless enough. It can't mitigate targeted government espionage but it raises the bar from mass collection of plaintext.
The US Treasury just announced they had an incursion by Chinese threat actors. Their "cyber security vendor" had a remote access key compromised, enabling the attackers access to endpoints within Treasury.
AFAIK this would not be news for EU telecoms: they are operated by Chinese companies, so those have permanent access to nearly everything anyway.
https://berthub.eu/articles/posts/5g-elephant-in-the-room/
So is that not the case for US telecoms?
Well at least American telecoms are fighting them. The European MO is to not only let themselves be conquered, but they actually pay China to do it. Thankfully American online services are on Europe's side, and work harder than anyone to protect their communications. These services don't even charge Europe anything, and Europe rewards them with billions of dollars of fines for doing it. Europe also defaced our websites in an effort to tax the attention economy, and removed legal protections for open source developers.
> fighting them
That's amusing. I'll grant that US companies haven't outright surrendered, and are still at least permitted to engage in lip service on the issue. But actual "fighting"? That would mean a tech world that looks very different than what we have today, and would fatally conflict with no end of "interests" in the US.
> American online services are on Europe's side, and work harder than anyone to protect their communications
Yeah sure, except giving the NSA access and complying with the CLOUD Act.
> capability to geolocate millions of individuals
I guess Starlink could easily geolocate every 4G/5G phone IMEI with huge direct-to-cell antennas.
Modern mobile phone protocols do not expose your IMEI in the clear; they have a multi-step process in which temporary identifiers are used to identify the device to most of the network. So this is not necessarily the case.
Even with SS7?
Happy new year!
SS7 only gets into the picture after the handset has connected to the home network, from what I understand (n.b. not a telco engineer). The IMEI is exposed to the network, but only to your network and only after the handset sets up an encrypted and authenticated connection with it.
5G uses a thing called a GUTI to identify handsets, not an IMEI. Think of it like a GUTI being a temporary IPv6 address allocated for a few hours by DHCP, and the IMEI being like a browser cookie. IMEI is exposed to your home network and networks you roam onto, but merely being in range of a tower doesn't expose it, and it's never transmitted in the clear over the air.
Also, within a network most of the components don't get access to the IMEI either.
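A toy model of that indirection (all names here are invented; in real 5G the GUTI is allocated by the core network and maps to the subscriber's permanent identity, with periodic reallocation): the radio link only ever carries the short-lived identifier, and the mapping to the permanent one lives in the core.

    # Toy model of 5G identifier indirection: only a short-lived GUTI goes
    # over the air; the permanent identity stays inside the core network.
    import secrets

    class CoreNetwork:
        def __init__(self):
            self._guti_to_identity = {}       # mapping exists only in the core

        def attach(self, permanent_id: str) -> str:
            guti = secrets.token_hex(4)       # fresh temporary identifier
            self._guti_to_identity[guti] = permanent_id
            return guti                       # only this is ever transmitted

        def reallocate(self, old_guti: str) -> str:
            permanent_id = self._guti_to_identity.pop(old_guti)
            return self.attach(permanent_id)  # rotation frustrates tracking

    core = CoreNetwork()
    guti = core.attach("490154203237518")     # hypothetical permanent identity
    guti = core.reallocate(guti)              # a passive catcher sees only GUTIs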
The last time I saw SS7 in production was about a decade ago. Which operator uses SS7 today?
All? But it's something internal to networks and between networks, not between a network and a user device, so I don't see the relevance to IMEI catchers which intercept the radio link.
Answer delayed by hours due to HN rate limiting.
> All?
None? As I said, I have not seen SS7 for a decade-plus in the USA/Canada. IMEI catchers have nothing to do with SS7.
You may not have seen it, but do you care to explain this Veritasium video from 3 months ago, where they specifically gain (not entirely legal) access to the SS7 network to hack Linus Sebastian's phone?
https://www.youtube.com/watch?v=wVyu7NB7W6Y
Are you saying the SS7 messages they're looking at of a Canadian telephone subscriber just aren't there?
And this is the EFF saying in July 2024 that the FCC should really make telcos address vulnerabilities in SS7:
https://www.eff.org/deeplinks/2024/07/eff-fcc-ss7-vulnerable...
Are you saying they're just wrong, those SS7 networks don't exist in the USA?
I mean, the article links the FCC request-for-comment on SS7 networks. Just as a quote: https://docs.fcc.gov/public/attachments/DA-24-308A1.pdf
> The Signaling System 7 (SS7) and Diameter protocols play a critical role in U.S. telecommunications infrastructure supporting fixed and mobile service providers in processing and routing calls and text messages between networks, enabling interconnection between fixed and mobile networks, and providing call session information such as Caller ID and billing data for circuit switched infrastructure. Over the last several years, numerous reports have called attention to security vulnerabilities present within SS7 networks and suggest that attackers target SS7 to obtain subscribers’ location information.
This is dated March 2024. It's talking about the very thing you say you haven't seen for more than a decade. To me, it sounds like that thing (the SS7 network) is alive and well in the USA, and the federal government is concerned about its lax security allowing spies to discover phone users' location information - the very topic we're discussing. It sounds like you're talking mince.
The key word here is '_and_'. Yes, I have not seen SS7 in a decade. On the other hand, Diameter is widely used everywhere.
You just sound like an unreliable witness.
If your claim is that there is literally no SS7 in US and Canadian telephone networks, then that is straight-up wrong. It exists in every network that still supports 2G/3G wireless protocols and classic PSTN standards. It was replaced in 4G/5G and SIP, but that requires your operator only supports those protocols and doesn't continue to support the old protocols. If it does, it will still have SS7 signalling and will still be susceptible to attacks (though it is free to run its own security to block them).
If your claim is that you haven't seen SS7 in a decade, then sure, maybe you haven't. But given there is actual, ongoing spying, impersonation, etc., that can be demonstrated in North America in 2024, and everyone involved says "it's due to SS7", and you're out here saying it's-so-rare-you-haven't-seen-in-a-decade, then what exactly is happening? What are the hackers using then, when the experts say they're exploiting SS7, if you insist it's not there?
Why did the GSMA publish this security paper in 2019? https://www.gsma.com/solutions-and-impact/technologies/secur...
Why are they promoting a Code of Conduct for GT lessees? https://www.gsma.com/solutions-and-impact/technologies/secur...
The attack demonstrated on Linus's channel, while it IS about SS7, I doubt involved an SS7 interface in the USA/Canada. Important details were left out of that demo, though some hints were given. SS7 is definitely a thing in some countries, though. The attack demonstrated on Linus's channel is not a direct one, but rather trickery, similar in spirit to the domains 'apple.com' and 'аррle.com'.
I directly ask you: do you think there is at least one SS7 network in the USA or Canada, yes or no?
If you claim there are no SS7 networks in the USA or Canada, please explain:
1) why the FCC believes they exist and need to be secured, as per their March 2024 note
2) what the UMTS networks, still operational in Canada, are using for messaging (note the 2025 dates in https://en.wikipedia.org/wiki/3G#Phase-out for Canada; 2G/3G is still alive and well there. And I note that most of the 3G phase out in the USA was in 2022, not in 2014 which is what they'd have to be for you to not have seen SS7 for a decade)
3) what the POTS networks, still operational in the USA and Canada, are using for messaging (noting that FCC 19-72 only removes the requirement on ILECs to provide UME Analog Loops to CLECs, and does not require them to shut down POTS networks entirely by August 2022. For example, AT&T only plans to have transitioned 50% of its POTS network by 2025)
https://www.he360.com/ has been doing this for a while.
Oh wow! I wonder how well it works in a crowded urban environment as opposed to the less crowded areas their examples of poachers and illegal fishing vessels operate in?
Poachers and illegal fishing vessels are better PR than foreign dissidents. :9
The federal government wouldn't pay hundreds of millions of dollars[0] to catch one or two fishing boats.
[0] https://www.usaspending.gov/award/CONT_AWD_N6600122C0065_970...
> "We detect no activity by nation-state actors in our networks at this time," an AT&T spokesperson said.
Sounds like the root of the issue.
I was working at a telecom research company where the director looked at me in disbelief that hacks can actually happen in telecom; his eyes went wide when I showed him a few small hacks. Wonder what he's thinking now, lol.
If I’ve learned anything about security, it’s that once someone has admin access, there’s no way to be sure your system is clean. It might look that way, but the system is lying to you, and even if you clean that part up, there are backdoors and Trojans just waiting in firmware, boot loaders, network stacks, backups, everything. Like, does your system have any “workarounds”, or can you wipe everything and redeploy? I guarantee it’s the former. OK, well then how do you know this bespoke thing is what was originally written by that guy five years ago?
How is this not an act of war? If they sent people physically over to do this it would be, so how is this different?
Better security is smaller nodes of value, and more of them. But it’s more profitable to say screw everyone else’s security and monopolize everything.
> This public-private effort aims to put in place minimum cybersecurity
Nice, we don't want the CEOs of these telcos to have to give up their bonuses, so we force them to do just the bare minimum. Isn't capitalism great?
Minimum is not “bare minimum”. The alternative to minimum requirements is no requirements.
Not allowing foreign entities to spy on their customers feels like the bare minimum to me.
> So we force them to do the just bare minimum. Isn't capitalism great
This has nothing to do with capitalism. The Soviet Union wasn’t a paragon of information security.
It does, at least with respect to how the US does capitalism.
The goal is to make the number at the bottom of the piece of paper bigger by a large enough margin in the next ninety days. If you can prove that there's the imminent risk of a specific cyberattack in the next 90 days and that it will have an adverse impact on getting that number bigger, fine, company leadership will pay attention, but that's rarely the case. Most cyberattacks are obviously clandestine in nature, and by the time they're found, the move isn't to harden infrastructure against known unknowns, but to reduce legal exposure and financial liability for leaving infrastructure unsecured. It's cheaper, and makes the number at the bottom of the piece of paper bigger.
>The goal is to make the number at the bottom of the piece of paper bigger by a large enough margin in the next ninety days. If you can prove that there's the imminent risk of a specific cyberattack in the next 90 days and that it will have an adverse impact on getting that number bigger, fine, company leadership will pay attention, but that's rarely the case.
1. Capitalists seem pretty content with money losing ventures for far more than "the next ninety days", as long as they think it'll bring them future profits. Amazon and Uber are famous examples.
2. You think the government (or whatever the capitalism alternative is) aren't under the same pressure? Unless we live in a post scarcity economy, there's always going to be a beancounter looking at the balance sheet and scrutinizing expenses.
I’m pretty sure that the Soviet Union was state capitalist.
"true communism has never been tried"
Well it hasn't. No one knows exactly what communism is, but they're pretty sure it's not a dictatorship.
My guy/gal, state capitalism as a transition towards socialism and then to communism was an explicit Marxist policy by the Soviet Union. Hence that state (of state capitalism) was a part of the big-C Communism of the Soviet Union.
Sometimes thought-terminating quips are not enough.
Imagine if the calls were E2E encrypted, phone accounts were anonymous, there were no identifiers like IMEI, and phone companies didn't detect and record geolocation... this attack would be much harder.
How is this not an act of war?
This feels like the perfect time for two outcomes: Ripley's solution, and deploy clean slate IPv6.
Can you elaborate? The first I assume is “take off and nuke the site from orbit”, per Aliens (1986). What are you advocating for with IPv6? Increasing the enumeration space for IP addresses from 32 bits to /64 prefixes?
I'm really just advocating for a drop-in replacement. You wouldn't redeploy the addressing architecture you have; instead, disrupt the surface the salt gets into. If you did a drop-in, why not go the whole hog and make it a v6 fabric?
But, a drop-in replacement of what? SS7? Diameter? Chinese cellular base stations from Huawei etc.? The collective telco IT infra and the shoddy security practices (or lack thereof)?
"Yes"
The addressing architecture isn't the problem though?
But if you think it is, I encourage you to run Yggdrasil.
No, it's true the addressing architecture isn't the problem. The uncertainty fear and doubt over all the deployed equipment is the problem. Hence "take off and nuke it from orbit" -do a complete replacement. IFF you buy that, why would you replace like (v4) with like? Why not replace with auto addressed self deploying v6 (for instance) which some people have been advocating for at scale, rather than a bundle of assumptions in dhcpv6 static assignment? Sure a lot of things would be static, but you could simplify the NMS burdens massively and reduce your routing complexity in the IGP.
It's not a totally silly suggestion, and it's not totally sensible either. Light-hearted. I doubt any exec at any telco outside of Jio or maybe Comcast would go there. Amongst other things, they'd destroy a lot of capital value doing the Ripley. Well... the sell-off of the liberated v4 space replaces some of that, until the price crashes...
So this is obviously the intelligence agencies cleaning data before Trump takes control, right?
War with China is starting to seem increasingly likely. We need to seriously prepare our industry now to manufacture things again and stop giving them our technology.
The NSA/CIA need to start making systems more secure by default and stop thinking spying on their own populations is a top priority.
China-nexus threat actors tend to be focused on espionage, including intellectual property theft. "Prepositioning" is a more recent observation, but it doesn't mean a war is inevitable. While it would be useful in that scenario, in others it may act only as a deterrent. Everyone should hope a war does not occur.
The NSA and CIA are neither able nor authorized to defend all privately-owned critical infrastructure. While concerns about agency oversight are warranted, I can assure you that spying on the population is not their top priority. It's abundantly clear that foreign threats aren't confined to their own geographies and networks. That can't be addressed without having the capability to look inward.
Secure by Design is an initiative led by CISA, which frequently shares guidance and threat reporting from the NSA and their partners. Unfortunately, they also can't unilaterally secure the private sector overnight.
These are difficult problems. Critical infrastructure owners and operators need to rise to the challenge we face.
The NSA/CIA need to start paying higher salaries to encourage more talent to go into the government sector. I remember in undergrad we had an NSA recruiter come talk to our computer science class. After the discussion, I was able to chat them up on the side and they mentioned salary being the hardest problem with recruiting top talent. Big tech pays too much and government not enough. Where would you go when you graduate?
Do they pay too little or have big tech monopolies distorted the market with their firehoses of cash? Bit of both?
What war?
The digital one has been running for quite a while, and there won't be a real one. China has nothing to gain from starting one. I mean, seriously... why would you shoot your customer?
> I mean seriously...why would you shoot your customer?
It depends on your goal. If it is strictly a commercial relationship, “shooting your customer” could be advantageous for preserving a revenue stream. Customer lock-in could be seen as a form of “shooting your customer”.
If your goal is political, "shooting your customer" may enable a regime change that is friendlier to you. We have done this multiple times in the Middle East, Central America, and South America.
The difference is, China has more have-nots than the US has people. The US is the main source of value creation for China. If Xi wants to not have a coup and be beh... I mean, if Xi wants to guarantee the future prosperity of the PRC, he needs to raise those have-nots out of poverty and the way to do that is by selling stuff to Americans and stealing their IP, not creating a shooting war with a country that has enough nuclear weapons to make this planet uninhabitable to intelligent life for centuries.
The US has done what it has done in the regions you list because they're already unstable (particularly the Middle East) and have no way of striking decisive blows against US territory.
The way to do that is to actually have stronger consumption in China, not antagonize the US.
We can never trust them again.
We must implement as LAW that a SIM card can provide and only provide a Zero Knowledge Proof of "this SIM is valid for this cellular/data plan up to a specific date".
If they want to track us all the time, whatever, if they can't keep that data safe from the Chinese Communist Party, then they aren't competent enough to have it.
"We must implement as LAW that a SIM card can provide and only provide a Zero Knowledge Proof ..."
Now is a good time to remind everyone that a SIM card is a full blown computer with CPU, RAM and NV storage.
Further, your carrier can upload and execute code on your SIM card without your knowledge or the knowledge of the higher level application processor functions of your telephone.
Is there any sandboxing to prevent access from the SIM card computer to information on your phone? And if so, absent of some (admittedly not very unlikely) 0day allowing sandbox escape, what would a malicious SIM program be able to do?
Yes, the card is a peripheral device to the phone - a hardware security key. It can't steal all your data for the same reason your Yubikey can't.
Answer delayed by hours due to HN rate limiting.
Basically this.
And hopefully your USB stack, or your phone's equivalent SIM interface, doesn't have vulnerabilities that the small computer that is the SIM card could exploit.
Operating systems that center their efforts on protecting high-risk users, like Qubes, dedicate a whole copy of Linux running in a Xen VM to interfacing with USB devices.
It'd be great if more information were available on how devices like Google's Pixel devices harden the interface for SIM cards.
Luckily e-sims are becoming more common.
Unluckily because they can only be issued by registered and licensed members of the GSM alliance, IIRC.
I can't believe the CPC would do this- add a backdoor to American technology for American agencies.
but that would be illegal and therefore impossibru /s
>and only provide a Zero Knowledge Proof of "this SIM is valid for this cellular/data plan up to a specific date".
How do you implement bandwidth quotas with this?
With a zero-knowledge proof of the service type, and with client-side managed and generated temporary IDs. See the sketch below.
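To sketch how that could work: one classic building block for "prove the plan is valid without revealing which subscriber you are" is a blind signature rather than a full zero-knowledge proof. A toy RSA version, with tiny insecure parameters and purely for illustration:

    # Toy RSA blind signature: the carrier signs a token attesting "valid
    # plan" without ever seeing the token, so presenting it later is
    # unlinkable to the account. Parameters are tiny and insecure.
    import hashlib, secrets

    p, q = 999983, 1000003                 # toy primes; never do this for real
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))      # carrier's private exponent

    # Subscriber: generate a random token and blind it before sending.
    token = int.from_bytes(hashlib.sha256(secrets.token_bytes(16)).digest(), "big") % n
    r = secrets.randbelow(n - 2) + 2       # blinding factor (coprime to n whp)
    blinded = (token * pow(r, e, n)) % n   # the carrier never sees `token`

    # Carrier: signs the blinded value iff the account is paid up.
    blind_sig = pow(blinded, d, n)

    # Subscriber: unblind to get a valid signature on the bare token.
    sig = (blind_sig * pow(r, -1, n)) % n

    # Any tower can now verify "valid plan" without learning who you are.
    assert pow(sig, e, n) == token

Quota enforcement could then hang off the tokens themselves (e.g. one token per data tranche) rather than off the subscriber's identity.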
They need to release all the metadata for Jeffrey Epstein et al. Clearly the U.S. government isn't going to after 20 years of lies and deceit.
The people involved in this have all the reason to blame China or Chinese backed groups for this, but has there been any actual evidence released that confirms this? Attribution is notoriously difficult and the only thing the public has to go on is the word of people involved.
Yet when one reads these articles it's just, "China, China, China!!!"
Anyone have a link to actual evidence?
Usually if North Korea or Russia did it, they say North Korea or Russia did it.
Honestly, it feels like they just pick a nation based on the current narrative. They already have plenty to bash Russia with regarding the Ukraine war, and they need to keep sinophobia alive and kicking, hence China.
Plainly I have no real evidence for this, other than the constant lack of evidence for their claims, and the doubts that are cast within the infosec community when data is available.
Since OP asked for evidence, maybe we should ask for the evidence that backs your hypothesis that bad reporting about China = unsubstantiated sinophobia
we've always been at war with eurasia
Unfortunately much of the West seems to have mistaken 1984 for a manual, rather than a cautionary work of fiction.
Many times in the past, a piece of malware developed by one group has been co-opted by another group. You see a virus like Stuxnet or Mirai that's working well, you just replace the payload, or switch the command-and-control code over to yourself. Then you launch an attack, but the weapon has someone else's fingerprints all over it.
As such, even if Xi Jinping himself had stood up at the UN and claimed responsibility for a particular Windows kernel-mode rootkit, that still wouldn't be incontrovertible evidence.