Six Lines

Misunderstanding Encryption

Posted by Aaron Massey on 02 Apr 2017.

In the ongoing saga of just how hard computer security is, I recently read a piece on Engadget about the effect of Let’s Encrypt, a free certificate authority trusted by all major web browser vendors by default, on malicious sites that peddle malware, perform phishing attacks, and do other heinous things. Here’s the premise of the article:

But like so, so painfully many great ideas from the tech sector, Let’s Encrypt apparently wasn’t built with abuse in mind. And so – of course – that’s exactly what’s happening.

Because it’s now free and easy to add HTTPS to your site, criminals who exploit trusting internet users think Let’s Encrypt is pretty groovy. When a site has HTTPS, not only do users know they can trust they’re on an encrypted connection, but browsers like Google’s Chrome display an eye-catching little green padlock and the word “Secure” in the address bar. What’s more, privacy and security advocates, from the EFF and Mozilla (who founded it) to little people like yours truly, have done everything possible to push people to seek these out as a signifier that a website is safe…

Let’s Encrypt knew about this before they deployed it. To say that they didn’t think about abuse is ludicrous. The people involved in creating Let’s Encrypt are arguably some of the best in the world at thinking about novel ways technology might be abused. They knew nefarious individuals and organizations would sign up for certificates, and we will talk about why they didn’t think this was their problem in a moment.

First though, we have to address the actual problem here: explaining what security measures a certificate provides to a non-technical person is extremely difficult. When organizations like the EFF and Mozilla push people to seek out the green padlock because it is safe, they are telling the truth, but not the whole truth. The padlock does make a site safer in important ways, but it doesn’t make a site completely benign.

Every safety story will be limited. No product can guarantee that it will never harm a user in any way. Consider low- or no-VOC paints. Pregnancy resource centers will tell you that you can paint the new room for the baby using low- or no-VOC paints while pregnant without harming the baby. But that doesn’t make no-VOC paint safe in all possible circumstances. What if you spill the paint, slip on it, and fall down the stairs? What if an intruder breaks into your home and drowns you in the paint? Won’t those things hurt the baby? Didn’t these pregnancy resource centers think of those possibilities?

The difficulty with encryption is that laypeople cannot separate extreme cases, like the painting scenarios I pointed out above, from common cases. That is, they literally think that a technology designed to secure communications between your computer and a website’s server should also be able to warn you of completely unrelated security risks. This is similar to believing that your no-VOC paint should prevent someone breaking into your house to drown you in it.

The safety that a valid certificate, and the browser indications it produces like the green padlock, actually provides is limited to exactly two things. First, you know that no one can overhear your conversation with the server. Second, you know that any subsequent communications with a server using that certificate are follow-on conversations with the same server. The first of these is relatively simple to understand; no eavesdropping is an easy analogy to make. The second one is much harder to explain, and it’s the source of the problem.

This second feature is called authentication, and it is sometimes described with phrases like “knowing who you’re talking to” or “the server is who they claim to be.” Both of these phrases are correct in some ways and misleading in others that are difficult to describe. I’ll give it a shot, but ultimately, it may be best to think of authentication as not much of a benefit simply because in this case it applies to machines rather than people or organizations. Imagine you’re trying to sell some sort of valuable stolen goods. Authentication is reaching out to a potential buyer and agreeing on a sign, like wearing a green hat, so that you would recognize one another when you meet. Then, when you meet, you would know that this was the person you reached out to. Authentication is like that, except mathematically perfect. The “green hat” in this case can never be misinterpreted. That’s it. That’s the only protection it provides. The buyer you reached out to could still be a cop, the FBI, or simply a criminal that wants to rip you off. So there’s a benefit there, and you should still do it, but it’s not nearly enough to ensure you’re protected.
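The “green hat” agreement can be sketched in code as key pinning: both sides fix a fingerprint in advance, and a later meeting checks it. This is a minimal illustration, not a real protocol; the key bytes are made up.

```python
import hashlib

# The "green hat" agreed on in advance: a SHA-256 fingerprint of the
# other party's public key. The key bytes here are made up for illustration.
agreed_key = b"-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZI...\n-----END PUBLIC KEY-----"
pin = hashlib.sha256(agreed_key).hexdigest()

def is_same_party(presented_key: bytes) -> bool:
    """Authentication: is the party we meet wearing the agreed 'green hat'?"""
    return hashlib.sha256(presented_key).hexdigest() == pin
```

Note what the check does and does not tell you: `is_same_party(agreed_key)` is true, and any other key fails, but nothing here says whether the party behind that key is trustworthy — only that it is the same party you contacted before.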

A bit later in the article, the author continues:

So why can’t Let’s Encrypt just revoke what are obviously fake PayPal certificates?

Because they believe it’s just not their problem.

Respected security and privacy experts understood that the creation of fake sites with valid certificates was inevitable. The reasoning is simple: two separate concepts, authorization and authentication, are often mistaken for a single one. Authorization refers to whether or not a known person should be allowed to do something. Authentication, as we discussed above, refers to determining whether someone is who they claim to be.

The difference between these concepts appears subtle because in interpersonal situations they almost completely align with one another. That is, what we’re allowed to do in interpersonal relationships aligns almost entirely with who we are. My mom doesn’t need to set up a secret authentication protocol to recognize me. She just has to look at me. It’s so simple, so easy, and so effective that she is never really aware that she’s doing it. We all do this without a second thought. But authorization and authentication are actually different things, a fact exploited to great effect in the Mission: Impossible films.
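The separation can be made concrete with a toy model: one check answers “is this who they claim to be?” and a completely different check answers “may this known person do that?” The usernames, tokens, and permissions below are hypothetical illustrations, not a real scheme.

```python
# Toy model separating the two concepts. Names and tokens are invented.
KNOWN_USERS = {"alice": "token-abc", "bob": "token-xyz"}    # who people are
PERMISSIONS = {"alice": {"read", "write"}, "bob": {"read"}} # what they may do

def authenticate(username: str, token: str) -> bool:
    """Authentication: is this person who they claim to be?"""
    return KNOWN_USERS.get(username) == token

def authorize(username: str, action: str) -> bool:
    """Authorization: is this (already authenticated) person allowed to act?"""
    return action in PERMISSIONS.get(username, set())
```

Bob can prove he is Bob and still not be allowed to write: `authenticate("bob", "token-xyz")` succeeds while `authorize("bob", "write")` fails. A certificate only ever gives you the first check.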

So, maybe you believe these concepts are different, but that’s not enough to explain why Let’s Encrypt can’t simply revoke “obviously fake PayPal certificates.” The problem here is that digital authentication is profoundly difficult for four reasons.

First, we don’t all agree on what is acceptable or unacceptable. How closely can a competitor style their products with a market leader before they cross the line into unfair or deceptive practice? Determining what makes a legitimate business isn’t always obvious. Should gambling sites be allowed to get certificates from Let’s Encrypt? In some jurisdictions, that would be legal. In others it wouldn’t because those societies believe gambling is destructive. Similar things can be said about many topics, some of which are deeply respected in some circles and deeply offensive in others. What should a global certificate authority do?

Second, even when we can agree, we’re not going to be right all the time. We’re going to let some not-so-obvious fake PayPal sites through, and we’re going to deny some legitimate sites simply because we think they are fake.

Third, even if we agree and we’re willing to accept that we will make mistakes from time to time, accepting the responsibility to perform this filtering is expensive and could have legal consequences. It transforms an elegant, impersonal technical solution into a messy, social one. Let’s Encrypt would have to set up an application process, screen potential sites to see if they met the criteria, and accept responsibility for mistakes they made. They could be sued by legitimate organizations who believe they have been unfairly denied a certificate, and they could be sued by end users who were victimized by sites that were mistakenly allowed certificates.

Fourth, even if the people behind Let’s Encrypt were willing to attempt building and maintaining some filtering process in spite of these things, they realized that they aren’t the only certificate authority in town. Even if they were wildly successful in addressing all the concerns discussed above, the best possible result is that a criminal enterprise would simply pay to get a certificate from a less strict certificate authority. That is, if you want to ensure valid certificates are issued only to sites you believe are safe, then you have to control the distribution of every certificate. Many, many other certificate authorities exist, so this is simply infeasible.
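This weakest-link property can be sketched directly: a browser accepts a certificate if any CA in its trust store vouches for it, so the overall bar is set by the laxest trusted CA, not the strictest. The CA names and the certificate structure below are invented for illustration.

```python
# A browser trusts a certificate if ANY trusted CA signed it, so the bar
# is set by the laxest CA in the store. All names here are invented.
TRUST_STORE = {"StrictCA", "LaxCA"}  # both shipped as trusted by default

def browser_accepts(cert: dict) -> bool:
    return cert["issuer"] in TRUST_STORE and cert["not_expired"]

phishing_cert = {
    "subject": "paypal-login.example",  # a look-alike domain
    "issuer": "LaxCA",                  # refused by StrictCA, issued by LaxCA
    "not_expired": True,
}
```

Even though StrictCA refused the look-alike domain, `browser_accepts(phishing_cert)` succeeds and the padlock appears anyway. Tightening StrictCA’s policies changes nothing.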

The article concludes with this:

You’d think that when it becomes well documented that criminals are obtaining and using that green padlock, it would undermine the whole purpose of getting people to trust them.

But this is the world of cybersecurity, and so you would be wrong.

The mistake made here represents the primary challenge with computer security: How do you explain subtle, unintuitive signals about security to non-technical people? Worse, how do you do so in a way that is future proof? I’ve seen some people complain that Violet Blue, the author of the piece in question, must have some other reason for posting this because she is normally tech savvy. She may simply have been misled by bad messaging or marketing.

Some certificate authorities offer so-called Extended Validation certificates, which do attempt to address the first and second concerns mentioned earlier. These certificates display the owner’s legal name in green lettering next to the green padlock in all major browsers. The site you’re currently reading this on uses a certificate from Let’s Encrypt, so you will just see a green padlock if you look up. However, virtually all major financial institutions have Extended Validation certificates. So the website for, say, Chase proclaims that it is owned by “JP Morgan Chase and Co. (US)” next to the padlock.
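The name shown next to the padlock comes from the certificate’s subject field. A sketch of pulling it out, using the nested-tuple shape that Python’s `ssl.SSLSocket.getpeercert()` returns for a verified connection; the sample values are illustrative, not copied from a real Chase certificate.

```python
# Subject in the nested-tuple form returned by ssl.SSLSocket.getpeercert().
# Values are illustrative, not taken from a real certificate.
ev_cert = {
    "subject": (
        (("organizationName", "JPMorgan Chase and Co."),),
        (("countryName", "US"),),
        (("commonName", "www.chase.com"),),
    )
}

def org_name(cert: dict):
    """Return the organizationName browsers show for EV certificates, if any."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None
```

A domain-validated certificate, like one from Let’s Encrypt, typically has no `organizationName` at all, so `org_name` returns `None`; there is simply nothing for the browser to print in green.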

Extended validation certificates are controversial, primarily because of the fourth concern mentioned above. Official procedures exist to provide a standard of care when issuing them, but no one ensures every certificate authority complies with these procedures. If one certificate authority won’t approve you, go to a less scrupulous or more easily fooled certificate authority to get your certificate. Computer security researchers have shown that people don’t notice the green-lettered “Owner” field nor do they know how it is different from a simple green padlock. Even in the best case scenario, extended validation certificates only ensure that the owner or operator of the site is the legal entity outlined in the green-lettered “Owner” tag. And not every site uses them. Wells Fargo, for example, doesn’t. The truly pessimistic security expert views extended validation certificates as a mechanism for certificate authorities facing cheap (or free, if you use Let’s Encrypt) competition to make up for lost revenue by rubber stamping an ownership claim and charging an arm and a leg.

This takes us back to the marketing message. Certificate authorities describe certificates generically as the way people can trust that your site is legitimate. They completely gloss over the nuance described in this post. Extended validation certificates are the only certificates that provide any measure of protection against phishing, if you can call verifying the legal entity that owns a site “protection against phishing.” Yet “phishing protection” is often marketed as a reason to get a certificate.

I can see how Violet Blue would be confused.