While the Associated Press has talked
a lot about adapting to a new internet-centric world, there's still little evidence that it's actually doing anything different. It's still trying to act like a gatekeeper rather than an enabler. Reuters, however, appears to be experimenting with something genuinely interesting: a new project, called OpenCalais, designed to help any information provider extract useful metadata from written content. In other words, it's an automated system that you can run an article or blog post through, and it will return useful data in a structured form. For example, if you wrote an article about Google's earnings report, it would note that the article was about Google, that it concerned an earnings report, and perhaps connect other important points. The idea is that the more semantic data is available, the more useful things can be built on top of it. For those who believe that better use of semantic data is the key opportunity
for newspapers to jump to the internet age, this could represent a very big deal. Of course, there's a very big "if" in that statement: the service actually needs to work well, be useful, and attract users. There's a bit of a chicken-and-egg problem here, as the really useful apps built on top of that data won't come unless the data itself is available. Having Reuters behind the project suggests a strong initial base of content, but it remains to be seen how much adoption can actually be driven through this system. Some of it may depend on how many resources Reuters has put behind the project to jumpstart it (and whether that commitment continues after Reuters' acquisition by Thomson Financial closes). Either way, it's an experiment worth following, and one far more interesting than simply demanding that people pay more money.
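To make the idea concrete, here is a minimal sketch of the kind of structured output such a system might return. To be clear, this is not OpenCalais's actual API: the entity lists, function name, and output fields are all invented for illustration, and a real service would rely on statistical natural-language processing rather than keyword lookup.

```python
# Toy illustration of structured metadata extraction, NOT the real
# OpenCalais service. Entity lists and output fields are invented.

KNOWN_COMPANIES = {"Google", "Reuters", "Microsoft"}
KNOWN_TOPICS = {"earnings report": "Finance", "acquisition": "Deals"}

def extract_metadata(text):
    """Return a structured summary of entities and topics found in text."""
    companies = sorted(name for name in KNOWN_COMPANIES if name in text)
    topics = sorted(label for phrase, label in KNOWN_TOPICS.items()
                    if phrase in text.lower())
    return {"companies": companies, "topics": topics}

article = "Google beat expectations in its latest earnings report."
print(extract_metadata(article))
# {'companies': ['Google'], 'topics': ['Finance']}
```

The point of returning a dictionary rather than plain text is exactly the one made above: once the "aboutness" of an article is machine-readable, other applications can filter, link, and aggregate on top of it without re-reading the prose.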