What Bandwidth Crunch?
from the doesn't-look-so-bad-this-way dept
While lobbyists, consultants and politicians claim that the internet is on the verge of collapse because it's running out of bandwidth, the techies beg to differ. We already pointed out that the report put out by D&T consultants was later refuted by the folks who run the very nodes D&T insisted were at risk of being overwhelmed. Now we have Andrew Odlyzko adding more weight to the idea that the bandwidth crunch so many lobbyists and telco execs are screaming about is something of a myth. Odlyzko, of course, is also the guy who pointed out that Worldcom was lying back during the dot-com boom when it insisted that internet traffic was doubling every 100 days.

Now he's noting that internet traffic growth is slowing, something very few of the doomsday estimates take into account. Internet traffic is still growing, of course, just not at nearly as rapid a pace. That isn't surprising: in the developed world, the internet is approaching maturity in terms of the number of people joining. Certainly, those people are using more and more bandwidth, but not at an overwhelming pace, and there appears to be plenty of capacity to keep up with the growth.

Once again, it looks like those warning of the imminent death of the internet are either relying on faulty data or extrapolating from data that doesn't account for slowing growth. Either way, it is interesting that the actual technologists never seem all that worried about running out of capacity. Hell, on the rare occasions a telco exec has broken from the party line and admitted that the threat of a bandwidth crunch is overstated, it's always been the CTO who said so. Somehow, I get the feeling that the technology guys have a much better handle on this than the lobbyists and the politicians…
Filed Under: bandwidth, internet growth, net neutrality
Comments on “What Bandwidth Crunch?”
I work for a telco hardware vendor. We make the PONs and racks and all that good stuff that lets the internet work.
I can tell you that our current generation runs very much below the infrastructural capacity of the network.
Truth be told? It’ll be just fine. The reason the telcos are claiming the internet is about to collapse is because, well, they get more funding to “beef up” the infrastructure that way. I don’t mind, because it means we sell more hardware, and the customers aren’t as price sensitive because they’re not spending their own money.
Make no mistake, this is about getting the federal government to cover the cost of upgrades, and has nothing to do with the limits of the system.
Isn’t that how it always is? Since when have politicians and lobbyists been paid to have common sense and use their brains? We all know they run on greed and propaganda.
Your emphasis is wrong, Kyros.
It’s the corporations that want government money – they are the ones who run on greed and propaganda. The politicians are just bought and paid for, that’s all.
The problem is that people making these theories are trying to view the “internet” as a single pipe. In reality, the more the internet spreads to include more people, the more pipes are created. Sure, some key areas have a lot more traffic than others, but I cannot see how we are about to hit a “maximum bandwidth”… whatever that may mean.
spam, porn and piracy
Given that most of the bit traffic on the internet is either spam, porn, pirated copyright material or bad home videos from YouTube, one might well ask the politicians why they are so keen to increase the capacity of the thing.
Re: spam, porn and piracy
Because not everyone is a right-wing nut case opposed to showing so much as an ankle?
Sounds like they borrowed the “Experts” for this study from the Global Warming camp.
“The Man” doesn’t believe in global warming, and he makes fun of other people’s choice of experts.
I feel an educational imperative coming on.
Please visit the Newsweek link below, where you can read the truth behind the global warming deniers – if you really want to.
Back on topic now – thanks for reading.
Re: Re: Re:
So the average temperature has risen .4 degrees Fahrenheit in the last 90 years. Break out the shorts, because it’s hot.
Remember guys, the internet is a series of tubes. We are in serious danger of one of those tubes getting plugged up.
Dirty little secret
What everyone seems to forget or not realize is that the data throughput depends mostly on the server supplying the data. Not every website can afford to have a fat pipe connection – traffic costs money. Some data centers allow for bandwidth throttling as well, per server/user.
I can routinely download stuff from the ftp server at sourceforge at speeds around 1000Kbytes/sec, so a 30MB file takes only 30 seconds (Cox Cable 6Mbit connection). But even an Akamai server sometimes seems choked or throttled compared to that.
Some data centers advertise “unlimited” bandwidth but it’s a half truth. It means unlimited total bytes/month but says nothing about the data rate. Trust me – it’s throttled.
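A quick sanity check on the numbers in that comment, sketched in Python. The figures (a 30MB file at ~1000 KB/s on a 6 Mbit/s cable line) come from the comment itself; the helper names are my own, and overhead and burst behavior are ignored.

```python
# Back-of-the-envelope transfer-time check for the figures above.
# Numbers taken from the comment: a 30 MB file at ~1000 KB/s on a
# 6 Mbit/s cable connection. Protocol overhead is ignored.

def transfer_seconds(file_mb: float, rate_kb_per_s: float) -> float:
    """Seconds to move file_mb megabytes at rate_kb_per_s kilobytes/sec."""
    return (file_mb * 1000) / rate_kb_per_s

def link_ceiling_kb_per_s(link_mbit: float) -> float:
    """Raw ceiling of a link in KB/s, ignoring overhead (8 bits per byte)."""
    return link_mbit * 1000 / 8

print(transfer_seconds(30, 1000))   # 30.0 seconds, matching the comment
print(link_ceiling_kb_per_s(6))     # 750.0 KB/s ceiling of a nominal 6 Mbit link
```

Note that 1000 KB/s actually exceeds the 750 KB/s ceiling of a nominal 6 Mbit/s link, which is consistent with the comment's broader point: observed rates depend as much on the serving side and on burst policies as on the advertised last-mile number.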
As streaming media demands increase, you have more clients demanding high quality low-adaptive traffic. Current Quality of Service (QoS) measures in place don’t provide an acceptable congestion response mechanism for best-effort jitter-sensitive traffic. An important consideration in the IP backbone provisioning problem is the fact that excess capacity is arguably the best known solution for CBR congestive regimes.
So, we may have a well-oiled, over-provisioned Internet today, but with YouTube traffic taking up 10% of Internet traffic as of this year and the acceptable quality for media ever-increasing, you can envision the need to continue adding capacity.
think it is a ploy
to raise prices on the internet. I started with DirecWay, and bandwidth was limited to 400MB in a 4-hour time frame. Now Hughes bought them out and it is a paltry 200MB per 24 hours for the same monthly charge. Talk about getting a good screwing. Time for the gov to regulate these guys for sure.
Backbone congestion? No. To-the-home congestion? Maybe. Cablevision can’t deliver anything more than 7Mbps (as measured by their own site) even though it’s supposed to be “up to” 15Mbps. That’s of course when their signal doesn’t drop, which it does quite often.
12 nailed it. It’s not about traffic on the backbone, it’s about traffic on the local node, below where the fiber terminates in the neighborhood.
In a hybrid fiber coax cable system the important number is the size of the local node. Bandwidth within a node is shared. A node with 100 homes past will be pretty fast, a node with 2000 homes past will not.
MSOs want to make (and have been making) nodes as large as possible; upgrading cable systems and subdividing nodes is very expensive and requires rolling a lot of trucks.
passed, not past. Duh.
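The node-sharing point above can be made concrete with a rough sketch. The downstream capacity figure and active-user fraction below are assumptions for illustration (roughly a few bonded downstream channels), not numbers from the thread; only the 100-vs-2000 homes-passed comparison comes from the comment.

```python
# Rough sketch of why node size matters in a hybrid fiber-coax plant.
# Bandwidth within a node is shared, so per-subscriber throughput falls
# as the node serves more homes. Capacity and activity figures are
# illustrative assumptions, not measured values.

def per_home_mbps(node_capacity_mbps: float, homes_passed: int,
                  active_fraction: float = 0.3) -> float:
    """Average bandwidth per simultaneously active subscriber on one node."""
    active = max(1, int(homes_passed * active_fraction))
    return node_capacity_mbps / active

# Same plant capacity, two node sizes, as in the comment above:
small = per_home_mbps(160, 100)    # node passing ~100 homes
large = per_home_mbps(160, 2000)   # node passing ~2000 homes
print(f"{small:.1f} Mbps vs {large:.1f} Mbps per active home")
# prints "5.3 Mbps vs 0.3 Mbps per active home"
```

The design choice here is the trade-off the comment describes: a big node amortizes plant cost over more subscribers, but the shared capacity per active home collapses, which is why subdividing nodes (expensive truck rolls and all) is the usual upgrade path.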
Mike, this is an interesting subject that needs a lot more fact-based discussion. Mr. Odlyzko and his colleagues have undertaken some useful analysis, but unfortunately they had to use an incomplete set of data.
They took traffic statistics from the public internet exchanges. That is public data and shows how much traffic is exchanged through the public fabric that those exchanges provide.
What that misses is the traffic exchanged through private interconnects directly between large network operators. Even though those interconnects may take place in the public exchange facilities (the trend, however, is to exchange traffic away from public exchanges), the traffic exchanged is not visible in any public data.
All of Level 3’s traffic is exchanged with other carriers through private interconnect agreements (formerly referred to as peering) and Level 3 is not unique in that regard. We estimate that more than 50% of all US and European traffic is carried by four large networks – nearly all of those interconnects will also be private in nature.
The upshot is that the work you referred to is a snapshot and it might not be representative of the entire Internet. We believe that it isn’t.
Our traffic (and that means bits paid for by customers on a 95th percentile basis) has grown at around 100% per annum for the last few years. This year we are currently on target to do that again – as measured from end December 2006 to end December 2007.
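The 95th-percentile billing mentioned in parentheses above works roughly like this: the carrier samples port utilization at fixed intervals (commonly every 5 minutes), sorts the samples, discards the top 5%, and bills on the highest remaining sample. A minimal sketch, with made-up sample values; real billing systems differ in interval, direction handling, and tie-breaking.

```python
# Illustrative sketch of 95th-percentile bandwidth billing.
# Sample values are invented for the example.

def percentile_95(samples_mbps: list[float]) -> float:
    """Billable rate: highest sample after discarding the top 5% of samples."""
    ordered = sorted(samples_mbps)
    index = int(len(ordered) * 0.95) - 1   # last sample within the 95th percentile
    return ordered[max(index, 0)]

# 20 five-minute samples; the single short burst to 900 Mbps falls in the
# discarded top 5%, so it doesn't affect the bill.
samples = [100.0] * 19 + [900.0]
print(percentile_95(samples))  # prints 100.0
```

The practical effect, and the reason carriers bill this way, is that brief bursts are free while sustained utilization is what customers pay for, so "bits paid for on a 95th percentile basis" tracks steady demand rather than peaks.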
Looking at the ports we have with our largest interconnect partners also gives us a view of our growth relative to them. This is imprecise, as it is only a sample and doesn’t give a complete view of other networks. However, it would appear that most other large networks are also growing at rates higher than those in the Odlyzko study.
As to your point about there being plenty of capacity: there simply isn’t capacity lying around to be consumed. There is plenty of fiber in the ground, but that fiber has to be lit and then an IP layer has to be built on top of it. All of that takes a great deal of capital. When you add the same capacity in a year that you added in all previous years of your existence, that is clearly an ever-increasing number. The question really should be: are the network operators committed to that expenditure, and are they happy to commit that capital to their Internet product when they have other competing demands for capital?
Level 3 has engineered the network carefully. We have high margins and our IP related revenue continues to grow. It is a good business for us and one we remain committed to.
One final point: these volumes of network traffic have started to exceed the design capabilities of certain types of network architectures. If you search through the NANOG deliberations you will find details of these “brick walls” discussed by other network operators.
Level 3’s R&D in this space has enabled us to develop and deploy unique solutions to these architectural constraints. We have patented many of these over the last few years. We have, and will continue to, remain well ahead of the curve so that we can grow our network to meet the demands of our customers and the growth imposed by the wider Internet.
So the debate in this area should focus on these three things:
How do we get a true independent view of the growth of the Internet?
Are network operators committed to deploy capital for their Internet networks?
What are the design constraints imposed when running IP networks over certain architectures at enormous scale, and how best should they be overcome?