Six Lines

Network Neutrality, Again

Posted by Aaron Massey on 11 May 2017.

The FCC is, once again, accepting comments on Network Neutrality policy. Perhaps the best place to begin is with John Oliver’s recent segment on the issue.

This was actually Oliver’s second piece on the topic. His first piece famously ended up crashing the FCC’s public comment system. That’s probably all most people remember about the first time he made network neutrality the focus of the show, but he did something else that’s perhaps more important: he defined network neutrality¹ for the vast majority of Americans. His definition is essentially that “all data must be treated equally,” and it’s been adopted by other popular advocates. I want to spend some time on his definition from that segment because it’s problematic and doesn’t capture the whole picture.

Informally, we can identify a few reasons the definition is problematic. The most obvious is malware. Do you really want your ISP legally obligated to treat malware as legitimate network traffic? I don’t, so I already want some data treated differently. Also, consider that end-user ISPs aren’t the only players in this game. Google, Facebook, Amazon, and other huge tech companies spend a ton of money ensuring that their data is delivered as fast as possible. They are building local caches, some even co-located with ISPs, that ensure their traffic will, all other things being equal, arrive faster than that of the garage-based startup that network neutrality is theoretically protecting. Many, if not most, of the perceived benefits of ISP-based network neutrality wouldn’t become reality because we’re not enforcing neutrality across every other part of the Internet, just the last hop from the ISP to the home.
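To put rough numbers on that last point, here’s a back-of-the-envelope sketch in Python. All of the latency figures are invented for illustration; the point is only the shape of the comparison.

```python
# Rough latency sketch: even with a perfectly "neutral" last hop,
# a big company's co-located cache beats a distant startup's server.
# All numbers here are illustrative assumptions.

LAST_HOP_MS = 10  # ISP to home; identical for every sender

backbone_ms = {
    "big company (cache co-located at the ISP)": 2,
    "garage startup (server many hops away)": 70,
}

for source, ms in backbone_ms.items():
    print(f"{source}: ~{ms + LAST_HOP_MS} ms total")

# ~12 ms vs. ~80 ms: equal treatment on the last hop doesn't equalize
# outcomes when everything before the last hop is unequal.
```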

But really, the biggest problem with this understanding of network neutrality is answering the following: “How do you measure ‘equal’?” There are three options:

  1. Content Objectivity: Should we regulate content on the Internet?
  2. Cost: Should we just ask customers to pay “by the byte” for service?
  3. Processing: Should we force all data to be processed in exactly the same way?

Each of these is problematic, with arguments both for and against its adoption. And, of course, you could argue for any two, or even all three, of these as part of your own understanding of “treat all data equally.”

Content objectivity is already regulated on the Internet. Whatever its merits, the Children’s Online Privacy Protection Act (COPPA) clearly regulates some forms of content. It’s not, however, an ISP regulation, and protecting children may not make sense to do at the ISP level. Other things, like malware filtering, do make sense there. The UK is a bit more aggressive about ISP-level content filtering, so it provides some good examples to consider (e.g., pornography, extremist material). Again, the question is more properly stated as: can we agree on some things that no citizen should be able to access through their ISP? There may not be much on that list, but adding anything to it makes this an exercise in line drawing rather than absolutes. If we add something that shouldn’t have been blocked, then we’re effectively censoring the Internet at the ISP level. Drawing this sort of distinction isn’t easy. For example, should copyright enforcement take the form of an ISP filter? It currently does in the UK.
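To see how simple the mechanism is compared to the policy question, here’s a minimal sketch of DNS-style blocklist filtering, one approach used by UK ISPs. The domains and the upstream lookup are hypothetical; the point is that the code is trivial, and everything hard lives in the list.

```python
# Minimal sketch of DNS-based blocklist filtering at an ISP. The
# mechanism is trivial; the entire policy question is what belongs
# on the list. Domains and the upstream lookup are hypothetical.

BLOCKLIST = {
    "malware-c2.example",    # broad agreement: block it
    "piracy-site.example",   # copyright enforcement: contested
}

def lookup_upstream(domain):
    # Stand-in for a real recursive DNS resolution.
    return f"198.51.100.7 (placeholder address for {domain})"

def resolve(domain):
    if domain in BLOCKLIST:
        return None  # refuse to resolve, as if the domain didn't exist
    return lookup_upstream(domain)

print(resolve("malware-c2.example"))  # None: blocked
print(resolve("news-site.example"))   # resolves normally
```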

The second part of a possible network neutrality definition is cost. Currently, the vast majority of Internet providers bill flat monthly fees. Due in part to this, customers have no clue how much data they use through their home Internet connections. Some people stream hours of video content every day. Others basically just use the web for email, news, and bill paying. Thus, from the perspective of an ISP, one customer is far more demanding than the other.

Many, if not most, ISPs have examined how and whether to “treat all data equally” by switching to metered billing. We could require metered billing by law, but most customers don’t want it. Network technologies have grown dramatically more efficient in the last 20-25 years. Even though the average customer is using more data, the network has kept pace under a relatively fixed billing model. If we switched, the “savings” from network improvements would accrue to the ISP first and reach the customer only through market competition, which is laughably weak in most parts of the United States.
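To make the tension concrete, here’s a back-of-the-envelope comparison. The plan price, per-gigabyte rate, and usage figures are all assumptions for illustration, not real ISP numbers.

```python
# Back-of-the-envelope comparison of flat-rate vs. metered billing.
# All figures below (usage, prices) are illustrative assumptions.

FLAT_MONTHLY_FEE = 60.00  # hypothetical flat-rate plan, USD/month
PRICE_PER_GB = 0.50       # hypothetical metered rate, USD/GB

customers = {
    # ~3 hours of HD video per day at roughly 3 GB/hour
    "heavy streamer": 3 * 3 * 30,  # ~270 GB/month
    # email, news, and bill paying
    "light user": 10,              # ~10 GB/month
}

for name, gb_per_month in customers.items():
    metered = gb_per_month * PRICE_PER_GB
    print(f"{name}: {gb_per_month} GB/month -> "
          f"flat ${FLAT_MONTHLY_FEE:.2f} vs. metered ${metered:.2f}")

# The heavy streamer pays far more under metering ($135 vs. $60), while
# the light user pays far less ($5 vs. $60) -- which is exactly why the
# customers who use the most data prefer the status quo.
```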

Finally, should we consider equality of processing as part of our definition of network neutrality? Network technologies don’t currently work this way. TCP packets and UDP packets are handled differently. So I suppose the more “reasonable” form of this argument is “Should we process all data equally according to current network standards?” This too falls down rather quickly. Network traffic isn’t static; it’s changing all the time.
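As a toy illustration of differentiated processing, consider a scheduler that dequeues latency-sensitive traffic (tagged here as UDP, e.g. voice) ahead of bulk TCP transfers. This is not how any real router is implemented; real equipment uses mechanisms like DSCP marking and weighted fair queueing. It’s only a sketch of the idea that networks already do not process all traffic identically.

```python
import heapq

# Toy priority scheduler: latency-sensitive traffic jumps ahead of
# bulk transfers. A sketch, not a real router's queueing discipline.

PRIORITY = {"udp": 0, "tcp": 1}  # lower number = dequeued first

class Scheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves arrival order per class

    def enqueue(self, protocol, payload):
        heapq.heappush(self._queue, (PRIORITY[protocol], self._seq, payload))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = Scheduler()
sched.enqueue("tcp", "web page chunk")
sched.enqueue("udp", "VoIP audio frame")  # latency-sensitive
sched.enqueue("tcp", "file download chunk")

# The UDP frame jumps the line even though it arrived second.
print(sched.dequeue())  # VoIP audio frame
```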

Network traffic actually changes based on the time of day. When Europe is waking up, most of the east coast of North America is asleep. At that time, delivering Internet traffic from the UK to Spain by going across the Atlantic Ocean and back may actually be cheaper than the direct route. A similar situation occurs in reverse during the evening on the east coast of the US. And none of this even mentions Software-Defined Networking, which promises to make network management even more efficient by making it even more dynamic.
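Here’s a toy sketch of that kind of time-varying route selection. The paths and cost numbers are invented, and real traffic engineering reacts to live link utilization rather than a hard-coded table; this only shows why “equal processing” can’t be pinned to one fixed routing behavior.

```python
# Toy sketch of time-varying route selection between two paths from
# the UK to Spain. Paths and costs are invented for illustration.

def link_cost(path, hour_utc):
    costs = {
        # direct European links: congested during the European day
        "uk-direct-spain": 5 if 8 <= hour_utc < 20 else 2,
        # transatlantic detour: spare capacity while the US east
        # coast sleeps, congested during the US day and evening
        "uk-us-spain": 3 if 8 <= hour_utc < 14 else 6,
    }
    return costs[path]

def pick_route(hour_utc):
    return min(("uk-direct-spain", "uk-us-spain"),
               key=lambda path: link_cost(path, hour_utc))

print(pick_route(9))   # European morning -> the 'uk-us-spain' detour wins
print(pick_route(22))  # evening -> 'uk-direct-spain' is cheaper
```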

So what should we make of network neutrality? Clearly, some things that ISPs have done, like throttling speeds for competitors or not counting services owned by their parent companies against bandwidth caps, are troublesome practices. I haven’t spent time highlighting the problems with having absolutely no regulation simply because others, like John Oliver, have done a good job of describing them. But the proposed solution of “treating all data equally” is probably worse than just letting ISPs and their business partners do whatever they want. Ideally, I would like to see a more nuanced approach to regulating ISPs that balances both consumer and ISP concerns regarding fairness, cost, and processing.

  1. Technically, he popularized a troubled definition from Tim Wu. And Wu’s definition ultimately goes back to Saltzer, Reed, and Clark’s paper about the end-to-end principle of network architecture.