Just commented this elsewhere, but my take on cybersecurity today: it's about to blow up in demand, with so many skiddies now able to hack anybody with an LLM. We are seeing websites, systems and companies being compromised at an alarming rate. I suspect one of these days we will see a headline of a compromise that will shock and horrify us all. Anyone sleeping on cybersecurity is a ticking time bomb.
Honestly, if you wanted to make a YC company today that targets AI in a meaningful way, I'd say make it focused on cybersecurity analysis. ;)
I am building in the cybersec space. I don't think you even need script kiddies now. Internal employees run dangerously bad ops with AI, which is itself a cybersec nightmare.
Whenever I tell people I work in computer security, their first question is, "Are you worried about AI taking your job?" To which I just laugh and respond, "AI is job security."
It really is! If anything, AI will only help you: you aren't worried about AI giving you bad code, just bad answers, which you would validate anyway. The other area where AI could be interesting, and I don't hear much buzz about it, is outages: if it can query all the online systems and logs in your cloud, it could probably triage an incident faster than an entire outage team could, in theory anyway. Surprised nobody's built such a system yet. ;)
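The deterministic core of such a triage loop doesn't even need an LLM. Here's a minimal sketch of the "correlate logs across services" step: rank services by which one started erroring first, since the earliest failure is often closest to the root cause. All service names and log data below are hypothetical, and a real system would pull these from your cloud's log APIs rather than in-memory lists.

```python
from datetime import datetime, timedelta

def triage(logs_by_service):
    """Rank services by the time of their first ERROR-level log line.

    logs_by_service: {service_name: [(timestamp, level, message), ...]}
    Returns service names sorted earliest-failure-first; the earliest
    failing service is often closest to the root cause of a cascade.
    """
    first_error = {}
    for service, lines in logs_by_service.items():
        error_times = [ts for ts, level, _ in lines if level == "ERROR"]
        if error_times:
            first_error[service] = min(error_times)
    return sorted(first_error, key=first_error.get)

# Hypothetical incident: the database fills its disk first, then the
# API and frontend cascade seconds later.
t0 = datetime(2024, 1, 1, 3, 0, 0)
logs = {
    "frontend": [(t0 + timedelta(seconds=12), "ERROR", "502 from api")],
    "api":      [(t0 + timedelta(seconds=8),  "ERROR", "db timeout")],
    "database": [(t0 + timedelta(seconds=2),  "ERROR", "disk full")],
}
print(triage(logs))  # ['database', 'api', 'frontend']
```

An LLM's role would be the fuzzy part on top: summarizing the messages from the top-ranked service and suggesting a likely cause to the on-call engineer.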
With Claude writing so much of the software in big companies, Anthropic is well-positioned to eat up SAST, DAST and a lot of the supply chain analysis. EDR and proactive security are still going to be massive businesses, however.
"Show me the incentives, and I'll show you the outcomes." - Charlie Munger
Right now, if you have a security breach, at least in the US, you send out a letter telling the person that their data could be God-knows-where and offer them two free years of credit monitoring. Victims aren't really going to use that because it's essentially useless. If they've got absolutely, positively nothing better to do with their time, I guess they could file a lawsuit. Who knows what the outcome would be. Probably not in their favor.
In other words, it's cheaper for them to overwork the InfoSec guys/gals and barely care about what is happening outside of day-to-day operations than it is to really secure their stuff. So they don't spend that money.
If you saw corporate valuation-cratering fines being implemented - the kind that would end the c-suite's careers and bring shame to their family lines for seven generations - I bet that they'd start catering lunches for the InfoSec team.
New idea: AI tool to help generate legal letters to companies after they leak data to cause them maximum inconvenience.
The human-speed legal system would become collateral damage.
You could also create an AI tool to help generate letters to lawmakers about how they need to make a real dent in this between reruns of Matlock in the retirement home.
> offer them two free years of credit monitoring. Victims aren't going to really use that because it's essentially useless
It's generally actively harmful, and the CRAs fight for this business from breaches. Universally, to accept the free credit monitoring you have to sign up for their highest-tier credit monitoring package (which can be up to $50/month) and supply a credit card, then hope to remember, a year later, to cancel at the end of the free period, because at that point they'll convert you to a paying customer.
I don't think fines are enough of an incentive. They're too easy to evade and insufficiently consequential to the people who are actually shipping code. Moreover, making them enormous (as you put it well, "valuation-cratering") unfairly punishes people who are not directly responsible for the failure. Instead, as in other engineering disciplines, engineers need to be personally liable for the consequences of failure. Not necessarily every engineer--not every mechanical engineer needs to be a P.E.--but someone directly responsible for the quality of the work needs to stake their reputation on it, and suffer the consequences when it fails.
In practice this would mean that you need to show conformance to some kind of security process. The actual outcome of that process is of secondary importance as long as you can show that you’re compliant. Very carefully written process documents _can_ improve things, but my confidence in security processes is low for companies without intrinsic motivation.
I think one can reasonably argue that sufficiently large fines that don't have a "but we followed ISO-xyz" loophole could produce better outcomes. The difficult part is making companies care about existential tail risks.
Companies are already following a bunch of standards like SOX, SOC2, HIPAA, etc., and documenting their adherence to checking all of the boxes, but incidents still happen every week.
Yes, it'll generate a lot of super annoying paperwork. But, hopefully, it will also tighten up software engineering standards. It has worked well in other disciplines.
There already are areas where such standards exist, e.g. safety-critical applications in aviation. Arguably the defect rate there _is_ lower, but I still think this method of achieving it is quite inefficient. And I think it's a lot easier to define a process for writing aviation software that doesn't crash than for writing software that is difficult to hack.
The missing piece is the requirement for a certified Professional Engineer to sign off on the system. That decouples the incentives from the corporate objectives, and makes it personal. We need that kind of professional accountability in software, otherwise it'll continue to be bad.
It is my understanding that personal responsibility already exists in safety critical software development.
Yep. I had a chance to go for a cybersecurity degree, and every time I've looked at that, the career path is basically an applied insurance job.
Cybersecurity does not make money. It does not raise profits for a company. Instead, it is compliance, contractual, and legal defences to repel lawsuits and keep data boundaries clean.
And who's the first to go? Groups that don't make money. Like cybersec.