DailyDirt: The Ever-Growing Growth Of Data…
from the urls-we-dig-up dept
There are a lot of reasons to be optimistic about the future. Some folks will always predict doom and gloom, but we say, “The Sky Is Rising!” (loud and proud, and again with the sequel, The Sky Is Rising 2). The advent of digital information has created an enormous wealth of data, and the amount of this digital awesomeness seems to be growing all the time. Here are just a few more examples of the amazing abundance of media that surrounds us.
- The Internet Archive has updated its Wayback Machine, indexing 5 petabytes of internet goodness, covering the web from 1996 to December 2012. That data comes from over 240,000,000,000 URLs, and this virtual backup of the web doesn’t even touch sites that require a login or have a robots.txt file that blocks the Wayback Machine. [url]
- Sandvine’s global internet phenomena report contains a prediction that US internet traffic may rise to over 700,000 exabytes per year by 2019. And if Netflix continues to do well (accounting for about double the traffic of YouTube, and crushing Amazon Video, Hulu and HBO Go), a lot of that traffic will be people watching streaming movies and TV shows (legitimately, too, not just using BitTorrent). [url]
- Every minute of the day, more and more data is generated. For example, 571 new websites per minute, 100K+ tweets per minute… gazillions of infographics and bazillions of random factoids. [url]
- From the beginning of time until 2003, humans generated about 5 billion gigabytes of data… and now we generate that much every 2 days. And that rate is accelerating (though humans aren’t the only ones generating all that data). [url]
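For the curious, that last figure is easy to sanity-check: 5 billion gigabytes is 5 exabytes (using decimal units), so “that much every 2 days” works out to roughly 2.5 exabytes a day, or a bit under a zettabyte a year. A quick back-of-envelope sketch:

```python
# Back-of-envelope check of the "5 billion GB every 2 days" figure,
# using decimal (SI) units: 1 GB = 10**9 bytes.
bytes_per_gb = 10**9
total_through_2003 = 5 * 10**9 * bytes_per_gb  # 5 billion GB = 5 exabytes

per_day_now = total_through_2003 / 2           # that total, every 2 days
per_year_now = per_day_now * 365

print(per_day_now / 10**18)    # exabytes per day  -> 2.5
print(per_year_now / 10**21)   # zettabytes per year -> ~0.91
```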
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post.
Filed Under: data, factoids, internet archive, media, sky is rising, wayback machine
Companies: sandvine
Comments on “DailyDirt: The Ever-Growing Growth Of Data…”
I would find the prospect enlightening if I didn’t suspect that the majority of that new information isn’t being generated by humans.
The Internet Archive...
I don’t know if they’ve changed it, but the Internet Archive used to go too far in obeying the robots.txt files. I once tried to access the archive of a site that no longer existed and was told that the site had been blocked due to a robots file. I was sure this was a mistake as the site was very simple and open while it was up. I was told that whoever owned the domain now, and who had put up one of those parking pages, had probably included a standard robots.txt file.
In other words: Person puts up site. Internet Archive makes copy of site. Site goes bust. New owner puts up generic site with robots.txt file. IA sees robots.txt file and disables access to its existing backup of the old site.
When I asked if they couldn’t manually override this for sites that are obviously not the same anymore, I was told that it was impossible.
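The mechanism the commenter describes is just a crawler honoring the exclusion rules in whatever robots.txt currently lives at the domain. A minimal sketch of that check, using Python’s standard robots.txt parser (the domain, user-agent string, and rules here are made up for illustration; a typical parked-domain file disallows everything):

```python
from urllib.robotparser import RobotFileParser

# A blanket "block everyone" robots.txt, like one a domain-parking
# service might serve after the original site goes away.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# Any crawler that respects these rules will refuse every URL on the
# domain -- including pages that existed under the previous owner.
print(rp.can_fetch("ia_archiver", "http://example.com/old-page"))  # False
```

If the archive applies the current file retroactively, the old snapshots become unreachable even though they were crawled when the site was wide open.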
Data Generation
Now how about telling us how much of that data we actually understand, or even utilize today? I seem to recall that number was extremely small (like a single-digit percentage).
The problem is we have all this data, but most of it is locked away so it can’t be used by the masses, poorly organized so that even with access you can’t find the data you need, and much of it is inaccurate or incomplete.
So we are nothing more than a bunch of pack rats!
Way to go!
[quote] and this virtual backup of the web doesn’t even touch sites that have a login or a robots.txt file that blocks the Wayback Machine.[/quote]
Nice. Now people who didn’t know you could do that will start.
Don’t give it away, people!
generated data incorrect, maybe
“Since the beginning of time until 2003, humans generated about 5 billion gigabytes of data… and now we generate that much every 2 days. And that rate is accelerating (but humans are not exclusively generating all that data).”
but.. how much of that is redundant data?
I see the same posts on multiple sites, and news churns through thousands of news sites, and millions of blogs..
retweets by the really giant bucket, the same movie 50 times per torrent site, the same song in a billion drop boxes, etc.
so, removing the echoes, how much data is actually generated?
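It’s a fair question, and deduplication is exactly how you’d start answering it: hash each document and only count the bytes of content you haven’t seen before. A toy sketch (the corpus here is invented for illustration):

```python
import hashlib

# A toy corpus with one syndicated duplicate -- the same story
# appearing on two different sites.
documents = [
    b"breaking news: data is growing fast",
    b"breaking news: data is growing fast",
    b"an original blog post",
]

raw_bytes = sum(len(d) for d in documents)

# Count bytes only the first time a given document's hash appears.
seen = set()
unique_bytes = 0
for d in documents:
    digest = hashlib.sha256(d).digest()
    if digest not in seen:
        seen.add(digest)
        unique_bytes += len(d)

print(raw_bytes, unique_bytes)  # the echo-free total is smaller
```

Real-world dedup is harder than this, of course, since retweets, re-encodes, and lightly edited reposts don’t hash identically, but even exact-match hashing would shave a meaningful slice off those headline numbers.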
I wondered how come Data started looking older and heavier.