Comcast Admits Broadband Usage Caps Are A Cash Grab, Not An Engineering Necessity

from the whoops-a-daisy dept

For years the broadband industry tried to claim that they were imposing usage caps because of network congestion. In reality they've long lusted after usage caps for two simple reasons: they allow ISPs to charge more money for the same product, and they help cushion traditional TV revenues from the ongoing assault from Internet video. Instead of admitting that, big ISPs have tried to argue that caps are about "fairness," or that they're essential lest the Internet collapse from uncontrolled congestion (remember the debunked Exaflood?).

Over the years, data has shown that caps aren't really an effective way to target network congestion anyway, can hinder innovation, hurt competitors, and usually only wind up confusing consumers, many of whom aren't even sure what a gigabyte is. Eventually, even cable lobbyists had to admit broadband caps weren't really about congestion, even though they still cling to the false narrative that layering steep rate hikes and overage fees on top of already-expensive flat-rate pricing is somehow about "fairness."

Comcast is of course slowly but surely expanding usage caps into its least competitive markets. More recently the company has tried to deny it even has caps, instead insisting these limits are "data thresholds" or "flexible data consumption plans." But when asked last week why Comcast's caps in these markets remain so low in proportion to rising Comcast speeds (and prices), Comcast engineer and vice president of Internet services Jason Livingood candidly admitted on Twitter that the decision to impose caps was a business one, not one dictated by network engineering.

Jason's not the first engineer to admit that caps aren't an engineering issue and therefore have nothing to do with congestion. In fact, if you followed the broadband industry's bunk Exaflood claims over the last decade, you probably noticed that ISP lobbyists say one thing (largely to scare legislators or the press into supporting bad policy), while actual engineers say something starkly different.

Repeatedly we've been told by ISP lobbyists and lawyers that if ISPs don't get "X" (no net neutrality rules, deregulation, more subsidies, the right to impose arbitrary new tolls, whatever), the Internet will choke on itself and grind to a halt. In contrast, the actual people building and maintaining these networks have stated time and time again that nearly all congestion issues can be resolved with modest upgrades and intelligent engineering. The congestion bogeyman is a useful idiot, but he's constructed largely of bullshit and brainless ballast.

Livingood will likely receive a scolding for wandering off script. Comcast, unsurprisingly, doesn't much want to talk about the comment further:
"We've asked Comcast officials if there are any technology benefits from imposing the caps or technology reasons for the specific limits chosen but haven't heard back yet. Livingood's statement probably won't come as any surprise to critics of data caps who argue that the limits raise prices and prevent people from making full use of the Internet without actually preventing congestion."
That's worth remembering the next time Comcast tries to insist that its attempt to charge more for the same service is based on engineering necessity. The problem? Our shiny new net neutrality rules don't really cover or restrict usage caps, even in instances when they're clearly being used to simply take advantage of less competitive markets. While Tom Wheeler did give Verizon a wrist slap last year for using the congestion bogeyman and throttling to simply make an extra buck, the FCC has generally been quiet on the implementation (and abuse) of usage caps specifically and high broadband prices in general.

There are some indications that the FCC is watching usage caps carefully, and says it will tackle complaints about them on a "case by case basis." But what that means from an agency that has traditionally treated caps as "creative" pricing isn't clear. It's another example of how our net neutrality rules were good, but serious competition in the U.S. broadband sector would have been better.

Reader Comments

  1. Anonymous Coward, 17 Aug 2015 @ 5:14pm

    Re: Caps and Speeds

    The problem is that people don't realize that network congestion is sometimes out of the provider's control. Congestion could be at the end-user's side, the bufferbloat problem Jim Gettys is well noted for evangelizing (though he doesn't discount other factors). It could also be at the endpoint (server issues, BGP issues, et al.), or at the interconnection points between service providers, i.e. peering links. If I asked you personally to test from 50 different network points on various providers to find the bottleneck, would you be able to do that? The Internet, a network of networks, is often a complex beast to troubleshoot: an a -> b issue really traverses a -> x -> y -> z -> et al. before it hits b, and even then the ingress path can differ from the egress path, hence the need for a bunch of different monitors.
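
The commenter's point about localizing congestion can be sketched in code. Given per-hop round-trip times from a traceroute-style probe, one common heuristic flags the hop with the largest RTT jump over the previous hop as the likely congestion point. This is only a rough illustration of the idea, not a real diagnostic tool: the hop names and latencies below are invented, and (as the comment notes) asymmetric return paths can inflate a hop's measured RTT without that hop being the bottleneck.

```python
# Sketch: locate the likely bottleneck hop from traceroute-style RTTs.
# Hop names and latencies are invented for illustration only.

def likely_bottleneck(hops):
    """Return (hop_name, rtt_jump_ms) for the hop with the largest RTT
    increase over the previous hop.

    hops: list of (name, rtt_ms) tuples ordered from source toward
    destination. A big jump suggests queuing delay (e.g. bufferbloat)
    at or just before that hop -- a heuristic, not proof, since reverse
    paths and ICMP deprioritization can also inflate a hop's RTT.
    """
    worst_hop, worst_jump = None, 0.0
    prev_rtt = 0.0
    for name, rtt in hops:
        jump = rtt - prev_rtt
        if jump > worst_jump:
            worst_hop, worst_jump = name, jump
        prev_rtt = rtt
    return worst_hop, worst_jump

path = [
    ("home-router", 2.0),     # end-user side (where bufferbloat shows up)
    ("isp-edge", 9.0),
    ("isp-core", 12.0),
    ("peering-link", 96.0),   # congested interconnect: +84 ms jump
    ("content-server", 99.0),
]

hop, jump = likely_bottleneck(path)
print(hop, jump)  # peering-link 84.0
```

In practice you would want samples over time and from multiple vantage points (the "bunch of different monitors" the commenter mentions) before trusting any single measurement.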
