Ben Thompson’s Stratechery article yesterday ended with a take on AI’s effect on security. I strongly disagree with him. Disagreement is an opportunity to learn, so I’ll put my thoughts here, and we’ll see who’s closer to right.
Let’s start with some context. Ben’s nominally discussing the Axios supply chain hack. Basically, the maintainer of an npm package with hundreds of millions of downloads had their GitHub credentials compromised. The attacker then added a malicious dependency that was essentially a remote access trojan. Huge deal. All over the security news. If you’re interested, Microsoft has a good post with more details. Also, it’s worth noting, as Ben does, that these supply chain attacks have been growing more serious for some time now.
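To make the scale of this concrete, here’s a toy sketch (made-up package names, tiny graph; real npm trees are far larger) of why one compromised package reaches so many projects: transitive dependencies fan out fast, so auditing “just your dependencies” really means auditing the whole tree.

```python
# Toy dependency graph: each package maps to its direct dependencies.
# All names here are hypothetical, for illustration only.
deps = {
    "my-app": ["http-client", "logger"],
    "http-client": ["url-parser", "follow-redirects"],
    "logger": ["colors"],
    "url-parser": [],
    "follow-redirects": ["event-stream"],  # a deep, easily-missed transitive dep
    "colors": [],
    "event-stream": [],
}

def transitive_deps(pkg, graph):
    """Collect every package reachable from pkg, i.e. everything
    that ships (and runs) when you install pkg."""
    seen = set()
    stack = [pkg]
    while stack:
        current = stack.pop()
        for dep in graph[current]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

full_tree = transitive_deps("my-app", deps)
print(sorted(full_tree))
# → ['colors', 'event-stream', 'follow-redirects', 'http-client', 'logger', 'url-parser']
# "my-app" declares 2 direct dependencies but actually installs 6 packages;
# compromising any one of them compromises every downstream installer.
```

Even in this toy, the app author probably never looked at `event-stream` — which is exactly the layer these attacks target.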
Ok, with that out of the way here’s Ben’s take, which focuses on the role that AI may play in similar incidents, emphasis mine:
What I also believe, however, is that (1) vibe-coding is going to lead to a lot of security issues in the near term, but (2) AI is going to lead to fewer security issues in the long-term. The fact of the matter is security is nearly impossible. There are so many potential or already-existent bugs in basically all software, for one, and even the most thoughtful security implementations end up having vulnerabilities that no one expected. And, on top of that, there is the fact that humans — like AI at times, to be fair! — are lazy and seek convenience, and security is almost always at odds with convenience. The axios incident demonstrates all of these issues.
AI is the answer to all of this. The truth is that all code needs to be examined for bugs, not just new code; AI is going to provide a way to examine everything that has ever been released (and yes, in the short-term, this is going to manifest as a host of new exploitable vulnerabilities). AI can also examine an entire dependency tree, which almost no human will. AI can navigate an extremely inconvenient but highly secure workflow, and can stress test every aspect of that flow — repeatedly — in a way no human can.
Ben’s making an understandable mistake. Put simply, security is more of a human problem than it is a technical problem. It’s not about the code.[1] It’s about the people, and it’s an adversarial challenge with AI on both sides. Because of this, my predictions are rather different.
Vibe-coding will lead to more security issues in the near term, but it won’t be as impactful as many expect. Vibe-coding isn’t replacing professional software so much as creating solutions to problems that were previously too small to justify spending time on. Sure, some of these programs will result in security incidents, but we’re not going to see many high-profile incidents that are the result of bad vibe-coding. Part of the security risk calculation is the value of the assets being protected, and I don’t think vibe-coding will remain the default approach as the value of the artifacts involved increases. I think Ben overestimates the effect of vibe-coding on security, at least to the extent that his prediction is measurable.
Let’s skip the emphasized bit for a moment and focus on something that I think Ben got right for the wrong reasons. Security is impossible because people simply do not understand how computers work, not because creating perfect, flawless code is impossible. Education is so ineffective that the security community has effectively given up on educating the general public as a primary solution. Now, the goal is to make it impossible for the average computer user to fuck things up through their own ignorance. This is why passkeys are better than passwords. People are the problem. Getting better at the coding part may help, but it’s not addressing the actual problem.
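The passkey point is worth a sketch. This is a toy model, not real WebAuthn (passkeys use public-key signatures; I’m using HMAC to keep it short), but it captures the human-proofing idea: the authenticator only signs challenges bound to the origin it was registered for, so there is no secret for a user to type into a phishing page, reuse, or hand over.

```python
import hashlib
import hmac
import secrets

# Toy model (NOT real WebAuthn): the authenticator holds a per-site key
# and only ever signs challenges bound to the origin it registered with.
class Authenticator:
    def __init__(self):
        self._keys = {}  # origin -> secret key

    def register(self, origin):
        self._keys[origin] = secrets.token_bytes(32)
        return self._keys[origin]  # the site stores this for verification

    def sign(self, origin, challenge):
        # The response commits to the origin, so it can't be replayed elsewhere.
        return hmac.new(self._keys[origin], origin.encode() + challenge,
                        hashlib.sha256).digest()

auth = Authenticator()
server_key = auth.register("https://bank.example")

# Legitimate login: the server sends a fresh challenge and checks the response.
challenge = secrets.token_bytes(16)
response = auth.sign("https://bank.example", challenge)
expected = hmac.new(server_key, b"https://bank.example" + challenge,
                    hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)

# Phishing site: even if the user is completely fooled, the authenticator has
# no key for the lookalike origin, so there is nothing to hand over.
try:
    auth.sign("https://bank-example.example", challenge)
except KeyError:
    print("no credential for phishing origin")
```

With passwords, the user is the verifier of the origin; with passkeys, the software is. That’s the “make ignorance harmless” design move.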
Ok, time for the big difference of opinion: AI will lead to more security issues in the long-term, and there are at least three reasons to believe this to be true. First, most security incidents involve a person doing something dumb or being confused or being tricked. Again, people are the problem, and this is the common denominator, not a coding failure. As AI gets better at tricking people, we will see more security incidents. The Axios hack is a good example: the maintainer admits the failure was one of social engineering, and it likely would have resulted in compromise regardless of his particular approach to credentials and authentication at the time.
Second, as AI gets better, we will simply have more software, which means more security incidents. Imagine AI hadn’t taken off, or that it wasn’t good at coding. Even in that world, there would still be a ton of software we wanted to create but couldn’t, because we were limited by the number of trained software engineers. If AI makes all our software engineers even 10% more efficient at creating software, then we’re going to have more software to fill this latent demand. It remains to be seen just how much more efficient AI can make software engineers, but the better it is, the more software we will have. And the more security incidents we will have.
Third, even in the best case scenario, AI is going to be available to both attackers and defenders. Normal people aren’t going to be interested in the restrictions that would be necessary to disrupt this dynamic. That would require air gaps, moving back to fully paper-based processes for everything, or something else equally extreme.
If AI is available to both parties, then it would be surprising if it favored defense in an information security context. So far, the only thing that actually works in favor of defense is the mathematics behind encryption. But even there, the problem is the human being. Furthermore, we have reason to believe that AI may be much better at attacking than defending. Imagine a script kiddie with functional knowledge of every kind of hack and the ability to work 24/7 without so much as a bathroom break. Remember that attackers only have to be successful once, but defenders have to be successful every time they are attacked.
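That last asymmetry is worth putting toy numbers on. Assume (purely for illustration, and treating attacks as independent) that a defender stops any single attack 99% of the time. The chance of surviving n attacks is 0.99 to the nth power, and automation makes n large:

```python
# Toy model of the attacker/defender asymmetry: the defender must win every
# round, so their survival probability decays geometrically with attack volume.
p_block = 0.99  # assumed per-attack defense success rate (illustrative)

for n in (10, 100, 1000):
    survival = p_block ** n
    print(f"{n:>4} attacks: P(never breached) = {survival:.4f}")
# ≈ 0.9044 at 10 attacks, ≈ 0.3660 at 100, and ≈ 0.0000 at 1000.
```

A 99% block rate sounds excellent, yet a tireless automated attacker who can afford a thousand attempts wins almost surely. Cheaper attacks mean more n, and more n overwhelmingly favors offense.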
All of this is speculative, predict-the-future stuff, and it’s worth being a bit humble here. I disagree with Ben, but this isn’t a trivial, easily predicted thing. We could also both be wrong. AI may prove to be more disruptive than anything else, making a lot of the things we would want or need to measure to see who was more correct simply irrelevant when we’re thinking about security. We may already be seeing examples of this in the ways that AI has changed warfare through the use of drone technologies. I would like to think that’s an example that proves my point, since it makes current approaches to ensuring security much less effective, but the reality is that we don’t yet know how this dynamic will affect the long-term picture.
-
[1] This is also true for software engineering. Just getting better at writing code is helpful, but it’s helpful in the same way that typing 150 words per minute is helpful. Marginal at best.
Six Lines