by Mike Masnick
Fri, Mar 29th 2013 7:39pm
from the we've-got-it-all-wrong dept
The cloud was supposed to free us, not lock us in
"Cloud computing" went by a variety of other names before this marketing term stuck, but the key idea was that it was supposed to free us from worrying about the location of our data. Rather than having to store things locally, the data could be anywhere, and we could access it via any machine or device. That sort of happened, and there definitely are benefits to data being stored in the cloud rather than locally. But... what came with today's "cloud" was a totally different kind of lock: a lock to the service.
I can point many apps to data stored locally
I wrote something related to this a few years ago, concerning music in the cloud. If I have a bunch of MP3s stored locally, I can point any number of music apps at my music folder, and they can all play that music. As long as the data is not in a proprietary format, I can find the app that works best for me, and the data stays separate from the app. Even with proprietary formats like Microsoft's .doc, other apps can often make use of them as well -- so, for example, I can get by with LibreOffice and not lose access to all of my old Microsoft Word docs.
This is really useful, because it helps us avoid vendor lock-in in many cases. Even when, say, Microsoft or Apple dominates the market, it's still possible to come in and be compatible. The competition then focuses on building better services, rather than reinventing the data model. That's much more useful to consumers, because the innovation is focused on making their lives better, rather than on reinventing the wheel.
Today's cloud brings us back to walled gardens
For the most part, today, however, "cloud" applications bundle the storage and the service as one, and the two are linked inseparably. You check your data into a new cloud service, but the application layer and the data are both held by the same company. Yes, you can often transfer data from one service to another -- such as via Google's "Data Liberation Front" effort, which is fantastic (and goes beyond many other companies' efforts) -- but the very fact that data needs to be "liberated" suggests we're taking the wrong approach altogether. Rather than having to "export" all of your feeds from Google Reader and then wait patiently alongside 50,000 other people trying to upload them to the few small Reader competitors out there, why shouldn't we each have had an OPML file stored somewhere that we control, one we could easily point any reader application at, whether it's local or "in the cloud"? And, yes, there are some services that attempt to do this, but it's not where the whole "cloud" space has gone.
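To make the OPML point concrete: OPML is just a small XML format for listing feed subscriptions, which is exactly why any reader, local or hosted, can consume the same file. A minimal sketch using only Python's standard library (the feed URL is illustrative, and real readers handle more attributes than this):

```python
import xml.etree.ElementTree as ET

def build_opml(feeds):
    """Build a minimal OPML subscription list from (title, feed_url) pairs."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "My subscriptions"
    body = ET.SubElement(opml, "body")
    for title, feed_url in feeds:
        ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=feed_url)
    return ET.tostring(opml, encoding="unicode")

def read_opml(opml_xml):
    """Any reader app can recover the identical subscription list."""
    root = ET.fromstring(opml_xml)
    return [(o.get("text"), o.get("xmlUrl")) for o in root.iter("outline")]

feeds = [("Techdirt", "https://www.techdirt.com/feed/")]
doc = build_opml(feeds)
assert read_opml(doc) == feeds  # round-trips: the data outlives any one app
```

The file lives wherever the user chooses; switching readers means pointing a new app at the same document, not exporting and re-importing.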
Separate and liberate the data from the infrastructure
What the cloud should be about is both freeing us from being locked to local data, and also freeing us from having that data locked to a particular service. I should be able to keep my data in one spot and then access it via a variety of cloud clients -- and the clients and the data shouldn't necessarily be directly connected or held by the same party. If I don't want to listen to my music via one app, I can just connect a different app to my personal data cloud and off we go. If Google Reader shuts down, no problem, just point a different app at my RSS data. No extraction, no uploading. Just go.
There are, of course, plenty of players that sort of do this already. Dropbox, Box, Amazon's S3 and even Google Drive are setting themselves up as personal data clouds, and there are a growing number of apps that run across them. Projects like the Locker Project are thinking about how we store personal data separated from apps as well. And I know there are a bunch of other projects, either around today or quickly approaching release, that also seek to do something in this space.
But, for the most part, the stories people tell about "cloud" computing almost always involve services that tie together the app and the data, so all you're really doing is trading the former limitations of local data for the limitations of a single service provider controlling your data. Many service providers want this, of course. It's a form of lock-in. Plus, having that sort of access to your data and your usage enables them to do other things, such as data mine you and your usage more accurately.
But, as users, we really should be pushing toward apps that separate the app from the data, and that let you point their "cloud" app at whatever place you store your "cloud" data. Some of this may involve standardizing certain data formats, but that makes sense anyway: once again, it's an area where there are tremendous benefits to not reinventing the wheel, so that innovation and competition can focus on the service level. While some vendors may fear losing lock-in, if they truly believe in their own ability to provide great services, it shouldn't be a problem. At the same time, they should realize that embracing this kind of world also makes it easier for others to jump in and test their own services.
The death of Google Reader raised a lot of issues around trust, and while you could "export" the data, that process is still messy and archaic when you think about it. The future of cloud computing should be much more focused on separating the data from the service. That would remove the fear that many are now talking about concerning adopting new cloud services that might not last very long. If the data is stored elsewhere, and entirely in the control of the user, then you don't need to trust the service provider nearly as much, but can dip in and test out different apps operating on the same data, and switch with ease.
If we're going to see the real promise of "the cloud" take place, that's where things need to head. We should be increasingly skeptical of "cloud" apps that also control the data.
by Michael Ho
Mon, Aug 27th 2012 5:00pm
from the urls-we-dig-up dept
- Printing at about 100,000 dots per inch in full color has been achieved by researchers at the Agency for Science, Technology and Research (A*STAR) in Singapore. As a proof of principle, a test image of Lenna was formed on a silicon wafer covered with a nanoscale metal coating. [url]
- Amazon is starting to offer archival storage for just $0.01 per gigabyte per month. This Glacier storage service is aimed at replacing old tape archives and geographically distinct facilities, but retrieving the data isn't so convenient: data retrieval requests can take hours (hence the name Glacier) and there's also a retrieval fee after you've accessed more than 5% of your data vault in a month. [url]
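A rough sketch of what that pricing model means in practice (the $0.01/GB/month storage rate and the 5% free-retrieval allowance are from the announcement; the 10 TB vault size is just an example):

```python
def glacier_monthly_storage_cost(gigabytes, rate_per_gb=0.01):
    """Storage cost at Glacier's announced $0.01 per GB per month."""
    return gigabytes * rate_per_gb

def free_retrieval_allowance(gigabytes, free_fraction=0.05):
    """How much data can be retrieved in a month before fees kick in (~5%)."""
    return gigabytes * free_fraction

# A 10 TB tape archive moved to Glacier:
vault_gb = 10_000
print(glacier_monthly_storage_cost(vault_gb))  # about $100/month
print(free_retrieval_allowance(vault_gb))      # about 500 GB retrievable free
```

In other words, parking the archive is cheap; pulling much of it back out in a hurry is where the costs (and the hours-long waits) come in.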
by Michael Ho
Mon, Aug 20th 2012 5:00pm
from the urls-we-dig-up dept
- Digitally-stored information about nuclear waste needs to be accessible many thousands of years from now. Engraving the info on sapphire discs with platinum is just one proposed solution that could work for future archaeologists -- but in what language should it be written? [url]
- If you thought your burned CDs/DVDs lasted forever, think again. But if you still want to store your data on plastic discs, there's a company (Millenniata) that sells an optical disc engraving technology producing CDs/DVDs that work with standard CD/DVD readers -- and that it claims will last for about 1,000 years (or at least hundreds of years). [url]
- Cave paintings in Spain have been dated to over 40,800 years ago -- old enough that Neanderthals may have made them. How much of our art will survive the next 40,000 years? [url]
by Glyn Moody
Tue, Jun 5th 2012 8:59pm
from the how-not-to-win-friends dept
Techflaws alerts us to an announcement by ZPÜ, the organization responsible for setting the levy on storage media in Germany, that fees will rise rather significantly (German original). For a USB stick with a capacity greater than 4 Gbytes, the tax would increase from 8 eurocents (about 10 cents) to 1.56 euros (about $1.93), a rise of 1850%; for a memory card bigger than 4 Gbytes, the fee would go up from 8 eurocents to 1.95 euros (about $2.42), an increase of 2338%.
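Those percentage figures check out; the rise is just (new − old) / old, computed here in eurocents to keep the arithmetic exact:

```python
def percent_increase(old_cents, new_cents):
    """Percentage rise from an old fee to a new one, both in eurocents."""
    return round((new_cents - old_cents) / old_cents * 100)

# ZPÜ's proposed jumps: 8 eurocents -> 1.56 euros and 1.95 euros
assert percent_increase(8, 156) == 1850   # USB stick over 4 GB
assert percent_increase(8, 195) == 2338   # memory card over 4 GB
```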
No justification for such a huge jump was offered, but since one of the constituent members of ZPÜ is the German music collection society GEMA, which seems to have an unlimited sense of entitlement when it comes to demanding money from the public, that's hardly a surprise.
In particular, no rationale is given for including memory cards, which are used almost exclusively in cameras to record content produced by end-users -- so the idea that the levy is somehow justified as a way of compensating creators for revenue supposedly "lost" by piracy is manifestly absurd.
Basically, this outdated and insulting approach treats all Germans using digital storage as if they were pirates. Of course, arbitrarily imposing 2000% tax hikes on storage is probably the quickest way to turn them into something much more dangerous to GEMA and its friends: ardent supporters of the German Pirate Party....
by Glyn Moody
Fri, May 25th 2012 5:31pm
from the jukebox-of-alexandria dept
Most people will be familiar with Moore's Law, usually stated in the form that processing power doubles every two years (or 18 months, in some versions). But just as important are the equivalent compound gains for storage and connectivity speeds, sometimes known as Kryder's Law and Nielsen's Law respectively.
To see why, consider that the IBM PC XT had a 10 Mbyte hard drive when it was launched in 1983, which meant you couldn't even fit a single song on it. Similarly, the first widely-used modem, the 1981 Hayes Smartmodem, had a maximum speed of 300 baud: to transfer a digitized song using a dial-up connection would have taken around 500 hours.
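The dial-up figure is easy to sanity-check, assuming an uncompressed CD-quality song of roughly 50 MB and the usual rule of thumb that a modem moves about one byte per ten baud (eight data bits plus framing overhead):

```python
def transfer_hours(size_bytes, baud):
    """Rough dial-up transfer time: ~10 line bits per byte,
    so 300 baud moves roughly 30 bytes per second."""
    bytes_per_sec = baud / 10
    return size_bytes / bytes_per_sec / 3600

# An uncompressed CD-quality song is roughly 50 MB:
song_bytes = 50 * 1000 * 1000
print(round(transfer_hours(song_bytes, 300)))  # ~463 hours, i.e. "around 500"
```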
With those kinds of figures, it's easy to see why the recording industry underestimated the threat that file sharing would become once the Internet arrived: based on the past, it was almost inconceivable that people would ever swap music between computers. Of course, once that did start to happen, and the shape of the future became obvious to many, the industry nonetheless wilfully ignored the facts and the trends at every turn, when it should instead have taken the lead in re-inventing media for the Internet age.
That woeful history of refusing to accept the implications of rapidly-advancing technologies makes this prediction, found via Slashdot, even more fateful:
Technologies that will make it possible to expand disk density include heat-assisted magnetic recording (HAMR), which Seagate patented in 2006. Seagate has already said it will be able to produce a 60TB 3.5-in. hard drive by 2016.
Assuming Seagate or someone else delivers, that 60 terabyte hard disk could store around 10 million typical MP3 files. A year ago, Spotify was said to have 15 million tracks, which means that you could store most of today's Spotify on that future Seagate drive. Spotify is likely to grow even larger by 2016, but it probably won't grow as fast as the storage capacity of hard disks, so there will be some point in the not-too-distant future when you can place all of its holdings on a single hard disk: Spotify in a box.
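The "Spotify in a box" arithmetic, assuming a typical MP3 of around 6 MB (the figure implied by the 10-million-file estimate above):

```python
def tracks_per_drive(drive_tb, track_mb=6):
    """How many typical MP3s fit on a drive of the given capacity (TB)."""
    return int(drive_tb * 1_000_000 / track_mb)

print(tracks_per_drive(60))  # 10,000,000 tracks on a 60 TB drive
```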
Obviously, few people will choose to do that, but storing your favorite million songs will not only be realistic, it will be cheap -- and even portable. Provided the transfer rate to and from such disks also keeps up with the growth in capacities -- an indispensable technological requirement, since otherwise they become impossible to use -- this means that people will be able to move around huge collections of music without ever touching an Internet connection. That makes all those three-strikes plans moot, since you won't actually need your broadband line in order to swap files with friends. You'll just plug your portable hard drives into a common computer and exchange stuff directly (as probably already happens with today's terabyte-sized portable disks).
In an ideal world, we would also see a kind of constant scaling of the intelligence of the recording industry, such that by 2016 it would finally accept that trying to stop sharing -- whether online or off -- is simply pointless. Somehow, though, I think we'll just have to make do with the other variants of Moore's Law.
by Mike Masnick
Tue, Apr 10th 2012 3:45pm
from the step2-startups dept
by Mike Masnick
Tue, Aug 30th 2011 7:55am
from the lame dept
But where this gets more interesting is that it appears Verizon is simply lying about the reasons why. The company is telling users it's for "security" reasons. But... while it's discontinuing FTP for its regular subscribers, those who pay for a higher-level hosting plan (starting at $5.95 per month) seem to still be able to use FTP. In other words, it's only a security problem if you're not paying -- suggesting that the "security" is more about Verizon's revenue than the security of your content. And while it's true that unencrypted FTP has real security issues (mainly on untrusted networks), there are well-established ways to deal with that, such as the encrypted variants FTPS and SFTP.
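Encrypted FTP isn't exotic, either -- it ships in commodity tooling. Python's standard library, for instance, offers an FTPS client right alongside the plain one; a minimal sketch (the host and credentials below are placeholders, not a real server):

```python
import ftplib

def ftp_client_class(secure: bool):
    """Pick an FTP client class: FTP_TLS speaks the same protocol as
    plain FTP but wraps the connection in TLS, so credentials and
    data aren't sent in the clear."""
    return ftplib.FTP_TLS if secure else ftplib.FTP

# Usage sketch (placeholder host/credentials, not executed here):
#
#   client = ftp_client_class(secure=True)("ftp.example.com")
#   client.login("user", "password")
#   client.prot_p()            # encrypt the data channel, not just login
#   client.retrlines("LIST")
```

Which is to say: if security were really the concern, offering an encrypted option would address it without cutting anyone off.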
by Mike Masnick
Tue, Jun 21st 2011 7:20am
from the not-what-it-was-designed-for dept
The latest example of a company trying to abuse the law this way is... Microsoft. We've been following this story for a while. Back in 2009, Microsoft suddenly announced that it would break third-party memory cards for the Xbox, basically because it could. This pissed off a lot of people, and kicked off an antitrust lawsuit from Datel, one of the third-party makers of such cards. That case is now moving forward, with Microsoft arguing there's no antitrust issue because it's merely blocking Datel and others for violating the DMCA's anti-circumvention clause, in that third-party cards have to get past some software used to block them.
If I had to guess, I'd say Microsoft is going to lose this case. It seems that courts are seeing through attempts to abuse the DMCA when it comes to stopping hardware competition. That's not the case when it comes to software, where things get murkier, but this seems like a pretty obvious attempt by Microsoft to abuse the intent and language of the DMCA solely to stop third-party competition over a physical product. Hopefully, the court recognizes this.
by Mike Masnick
Fri, May 20th 2011 12:54pm
from the of-course-they-do dept
But, of course, technologically speaking, the actions of these systems can just as easily be used to share unauthorized content in a potentially infringing manner, and it appears that this is what the RIAA is targeting. As Eriq Gardner notes at the link above, it's not at all clear what the RIAA intends to do with the information it gets. It's difficult to see how it could sue Box.net, which almost certainly has no real liability here, but it could go after the users -- something we'd thought the RIAA had sworn off for the time being.
The whole thing just seems like a waste of time. This is what computers do. They copy. There's always a way to copy. Pretending you can stop that isn't rational. What would be rational is helping the RIAA member labels adapt, but for whatever reason, that just doesn't appear to be within the RIAA's skillset.